
Samsung Galaxy Fold 2 Vertical Folding

          Samsung Galaxy Fold 2 Vertical Folding

[Image: galaxy-fold-pliable]
    
      The successor to the Galaxy Fold will fold vertically. It should be launched in early 2020.

     At the Samsung 2019 Developer Conference in San Jose, California, South Korean electronics giant Samsung previewed a new version of the Galaxy Fold. The presentation video promises to go "from the Galaxy Fold to the innovative new form factor," featuring a vertically folding version of Samsung's popular foldable device.

     The video shows the Galaxy Fold, which folds horizontally (like a book), morphing into a taller device that folds vertically.

     The successor to the Galaxy Fold will have the look of a classic smartphone but can fold vertically to a square shape, making it more portable.

                                                 Source: Samsung Newsroom

      Interestingly, a similar teaser published on Samsung's Korean YouTube page goes a little further, showing the Galaxy Fold and the new device side by side. It also shows the new device folded at 90 degrees while recording a video, suggesting that you can use the fold to prop the device up while filming something.

     Finally, the video reveals that the new Galaxy Fold device will have a punch-hole front camera.

     By announcing a new foldable smartphone so quickly, Samsung confirms that it takes this technology very seriously and that it doesn't intend to settle for a single device reserved for a very small and wealthy clientele. 

     2020 should be an important year for the Korean company, which may well announce several smartphones of this kind, multiplying the designs. After the tablet capable of becoming a smartphone, here now is the smartphone capable of becoming a small square object.

     Details about the new Galaxy Fold device remain unclear. Let's hope Samsung has found a way to improve its foldable screen technology so that this new device doesn't suffer the same problems as its predecessor.


All about Drones, UAV

          The definition of the Drone

[Image: drone-dji]
 
      A UAV (unmanned aerial vehicle) is an aircraft with no pilot on board that can perform many complex tasks autonomously, such as take-off, landing, or navigation. Put simply, a UAV is a semi-intelligent, remotely controlled system.

     If there is one technology that has been rapidly adopted by the general public, it is that of drones. Not a month goes by without these flying machines being in the news, and the diversity of applications is constantly surprising.

     UAVs have become a common leisure object, and the French company Parrot, a pioneer in the field, initially enjoyed several great successes. However, it has had to reckon with the appearance of competitors such as China's DJI, the current world leader with best-sellers such as the Mavic range.

     With the featherweight Anafi, which captures 4K images, Parrot hopes to regain some ground. Meanwhile, major players such as Apple or Tesla are preparing to enter the dance, and we can imagine what their impact on this market would be.

          The range of services on offer is multiplying

      Some offer to film your wedding from the air, others to monitor crops or livestock. Amazon and others, such as DHL, have been seriously considering parcel delivery for quite some time. In South Africa, drones are being used to curb the poachers who prey on rhinos.

     According to the FAA, which regulates air traffic in the United States, seven million commercial drones will be sold in 2020, almost three times more than in 2016 (2.5 million)! In other words, we will have to get used to seeing a sky populated by these new types of flying objects.

     UAVs are not very recent: the first use of a combat UAV dates back to the Second World War, with the United States Navy's Special Task Air Group 1 (STAG-1). There are several types of UAVs, but we will deal with only three of them: medium-altitude long-endurance (MALE) UAVs, combat UAVs, and indoor UAVs.

     The MALE flies between 5,000 and 10,000 meters and has a long endurance of 10 to 24 hours. Wingspans range from 10 to 20 meters, and these models fly relatively slowly, between 250 and 350 kilometers per hour, compared with an airliner, which generally cruises at Mach 0.7 to 0.8, i.e. around 900 km/h. In service since the 1990s, each UAV costs around the modest sum of 10 million euros.

     They are generally used for surveillance, target acquisition, and designation missions and also play a very important role as communication relays between different entities, such as satellites, air vehicles and command centers.

[Image: drone-planeur]
   
      As for combat UAVs, the United States has developed two demonstrators: the X-45, which made its first flight in 2002, and the X-47B. But is it possible to build a UAV with the same performance as a traditional combat aircraft such as the Rafale or the Mirage? On the one hand, it is possible to remove all the equipment needed by a human pilot, which can reduce weight and dimensions by about 40% compared with a piloted fighter aircraft; on the other hand, it seems difficult to develop control software as capable as a fighter pilot.

     For example, the software must be able to automatically identify, designate and track targets, which is why governments currently reserve these combat UAVs for basic operations such as the destruction of ground targets, tanks or anti-aircraft batteries.

    For safety reasons, the decision to fire is always subject to the judgment of a remote operator, which means that these UAVs are not totally autonomous; future models, however, will certainly be able to carry out much more complex missions.

     Indoor UAVs have become highly democratized in recent years. They are generally less than a meter long and can carry a maximum of one kilogram of payload, whereas MALE UAVs can carry, and be controlled by, powerful onboard computing units.

     These indoor UAVs are therefore subject to strong constraints in terms of onboard computing. A second important difficulty is their endurance: most existing models are equipped with electric motors, and because of the low payload, the batteries generally cannot last more than an hour.

     In the medium term, the batteries will have to be miniaturized, or the propulsion system modified. Because of their low weight, these UAVs are highly reactive, so they must be controlled very frequently, at least at 20 Hz; today most radio controls operate at 2.4 GHz.
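     To make this control frequency concrete, here is a minimal sketch of a fixed-rate 20 Hz loop in Python. The sensor and actuator functions are hypothetical placeholders (a real flight controller runs on embedded hardware and often much faster), but the fixed-period structure is the same.

    # Minimal sketch of a fixed-rate control loop at 20 Hz (illustrative only).
    import time

    RATE_HZ = 20
    PERIOD = 1.0 / RATE_HZ          # 50 ms between control updates

    def read_sensors():
        return {"roll": 0.0}        # hypothetical measurement

    def apply_commands(state):
        pass                        # hypothetical actuator update

    next_tick = time.monotonic()
    for _ in range(100):            # 100 iterations = 5 seconds at 20 Hz
        apply_commands(read_sensors())
        next_tick += PERIOD
        # sleep only for the time left in this period, keeping the rate steady
        time.sleep(max(0.0, next_tick - time.monotonic()))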

     Their other constraint is the low number of onboard sensors, again due to the low payload. Previously, these UAVs were used very little in real missions, notably because of their low computing capacity; however, the processors now under development make it possible to analyze data more and more quickly.

    One likely application of these UAVs is the exploration, securing, or inspection of buildings, for example in urban warfare. Inspecting an enemy building is a very high-risk mission; by sending a UAV inside, it becomes possible to retrieve a lot of information, such as the number of enemy fighters and hostages, the position of the shooters, or the layout of the spaces.

What happens to our personal data on the Internet?

          What happens to our personal data on the Internet?

[Image: privacy-data]

      The use of our personal data, some of it very sensitive and private, as part of Big Data collection is something that affects us all, directly and very intimately.

     If companies covet personal data so much, it's because it gives them a huge competitive advantage.

     Indeed, personal data will allow companies to know their customers better, and thus meet all their expectations thanks to information from various sources, whether on social networks, applications, web browsing or connected objects.

     We provide companies with extremely useful information. This can be, for example, your name, first name, and contact information, i.e. quite standard information, but today it can be much more precise: buying behavior, real-time geolocation, and even all your little secrets! All this allows companies to know us better, to target their customers better, and thus to adapt their marketing strategy.

     Basically, this means that each customer will be offered a product that will meet all their expectations, so we're talking about the hyper-customization of offers, which is what Amazon has been doing for years thanks to Big Data technologies.

          The marketing of our private data

      Amazon uses an algorithm to personalize shopping suggestions as much as possible, whether by email or when you browse their site.

     In fact, these purchase suggestions are based on your latest searches and previous purchases, and even on information from other users who have the same profile as you. All of this allows Amazon to maximize your purchases, because the suggestions are based on what you really want to buy, and thus to maximize its profits. And it's very effective, because it works very well.
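     To illustrate the "users with the same profile as you" idea, here is a toy sketch in Python of user-based collaborative filtering, using cosine similarity between purchase histories. The data and names are invented for the example; Amazon's actual system (item-to-item collaborative filtering at massive scale) is far more elaborate.

    # Toy user-based collaborative filtering: suggest products bought by
    # similar users. Purely illustrative data and scoring.
    from math import sqrt

    purchases = {                  # user -> {product: times bought}
        "alice": {"book": 2, "drone": 1},
        "bob":   {"book": 1, "drone": 1, "watch": 1},
        "carol": {"watch": 2},
    }

    def cosine(u, v):
        # similarity between two purchase-history vectors
        dot = sum(u[k] * v[k] for k in set(u) & set(v))
        norm_u = sqrt(sum(x * x for x in u.values()))
        norm_v = sqrt(sum(x * x for x in v.values()))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

    def suggest(user):
        mine = purchases[user]
        scores = {}
        for other, theirs in purchases.items():
            if other == user:
                continue
            sim = cosine(mine, theirs)
            for product, count in theirs.items():
                if product not in mine:        # only suggest unseen products
                    scores[product] = scores.get(product, 0.0) + sim * count
        return sorted(scores, key=scores.get, reverse=True)

    print(suggest("alice"))  # -> ['watch'], bought by the most similar user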

 
[Image: commercialisation-données]

      Today we all face a dilemma: we are torn between the desire to enjoy the benefits of Big Data technologies and concern for privacy. This is all the more true in the case of so-called free services. For example, many of us have a Gmail address, while we know that Google used to scan all our emails to gather information about us and serve us suitable ads on the search engine.

     And we agreed to this deal because, when it comes to free email, there is hardly anything better than Gmail, whether in terms of functionality, storage space or ease of use.

The problem is that we have become so used to giving away our personal data in exchange for services that it has become commonplace, and today we hardly pay attention to the personal data we hand over to companies.

     For example, most applications are free, but in return they ask for a lot of personal data: real-time geolocation, web browsing history, access to your SMS and MMS messages, the IMEI number, your calendar and contacts...

     Of course, such companies resell their users' personal data. That's where we can make a difference: first, as a user, be vigilant about the personal data that applications request.

     Then, as a consumer, favor companies that make responsible and appropriate use of personal data and communicate transparently. For example, if a ride-hailing (VTC) company geolocates you in real time while its application is off, switch to its competitor.

     Here is a tip to keep track of your personal data and find out what a company holds about you. For each company, find a contact address, usually on their website; once you have this contact email address, ask them to send you the personal data they hold about you. You can even ask them to stop using this personal data for commercial purposes.

     Obviously, when you send this email you must attach an ID to prove that you are indeed requesting access to your own personal data. And if the company does not answer you, you can turn to the CNIL, the Commission Nationale de l'Informatique et des Libertés.

     The CNIL was established in 1978. It is an independent administrative authority that ensures that information technology does not infringe on freedoms, rights, human identity or privacy. By showing that you followed the proper procedure but that the company did not respond, the CNIL will take up the case in order to see justice done.

          General Data Protection Regulation

European Union law

      *1 Article 4 of the General Data Protection Regulation defines personal data as "any information relating to an identified or identifiable natural person [...]; an 'identifiable natural person' is one who can be identified, directly or indirectly, in particular by reference to an identifier, such as a name, an identification number, location data, an online identifier, or to one or more factors specific to his or her physical, physiological, genetic, mental, economic, cultural or social identity".

     This definition is slightly more extensive than the one contained in Article 2 of Directive 95/46/EC, which stated that the person becomes identifiable "in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity".

    *1 Source: Wikipedia

    *2 Regulation No 2016/679, known as the General Data Protection Regulation (GDPR), is a European Union Regulation that constitutes the reference text on the protection of personal data. It strengthens and unifies data protection for individuals within the European Union.

     After four years of legislative negotiations, this regulation was definitively adopted by the European Parliament on 14 April 2016. Its provisions are directly applicable in all 28 Member States of the European Union from 25 May 2018.

     The Regulation replaces the Directive on the protection of personal data adopted in 1995 (Article 94 of the Regulation); unlike Directives, Regulations do not require the Member States to adopt transposing legislation in order to be applicable.

     The main objectives of the GDPR are to increase both the protection of data subjects when their personal data is processed and the accountability of those involved in such processing. These principles are backed by stronger powers for the regulatory authorities.
*2 Source: Wikipedia


What is the Internet of Things? IoT

           What is the Internet of Things?

[Image: IoT]

      The Internet of Things (IoT) is a network of networks that makes it possible, via standardized and unified electronic identification systems and wireless mobile devices, to identify digital entities and physical objects directly and unambiguously, and thus to retrieve, store, transfer, and process the related data seamlessly between the physical and virtual worlds.

     The Internet of Things, or IoT, is primarily a concept rather than a specific technology or device: the idea is to extend the Internet network, and thus data exchanges, to the objects of the physical world.

     These objects connected to the Internet can take the form of everyday objects: for example, connected cars, connected watches, connected glasses, etc.

     As the IoT grows, the list of connected objects becomes ever more extensive.

     A connected object has the peculiarity that it does not function in isolation: it is able to communicate and exchange information with other connected objects.

     Connected objects are also called intelligent objects, because this is machine-to-machine communication, with no need for a human intermediary. And this is the biggest change, since until now computers, and therefore the Internet, needed humans to supply them with data.

     Today, the Internet of Things is a bridge between the physical world and the virtual world.

     Imagine: you come home after work, stressed. Your heart rate is higher, so your connected watch transmits this information to the other connected objects. Your connected radio then decides to put on soft music to relax you; your phone can switch to do-not-disturb mode; the temperature of your apartment adapts using information on the weather outside; and the interior lighting adjusts according to the brightness outdoors.

     The purpose of the connected objects is to adapt not only to your needs but also to your environment.

     Each connected object has a standardized digital identity (an IP address, for example), made reachable via a wireless communication system (RFID chip, Bluetooth, Wi-Fi...). Finally, these objects are equipped with sensors: it is these sensors that collect the information.
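     As a concrete illustration, here is a minimal sketch of a connected object publishing a sensor reading over MQTT, a protocol widely used in the IoT. It assumes the paho-mqtt Python library and a hypothetical broker host; the topic and payload names are invented for the example.

    # Minimal sketch: a connected watch publishing a heart-rate reading.
    import json
    import paho.mqtt.client as mqtt

    # On paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2 to Client().
    client = mqtt.Client()
    client.connect("broker.example.com", 1883)   # hypothetical broker, default port
    reading = {"sensor": "heart-rate", "bpm": 92}
    client.publish("home/wearables/watch", json.dumps(reading))
    client.disconnect()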

     Personal data

     Indeed, connected objects produce large amounts of data, and processing this mass of data involves new concerns, particularly around data confidentiality and security.

     The security of the IoT has come under scrutiny after a number of high-profile incidents in which a common IoT device was used to infiltrate and attack the wider network. The implementation of security measures is essential to ensure the security of the networks to which IoT devices are connected.

           IoT Security Challenges

[Image: securiter-iot]

      There are a number of challenges to securing IoT devices and to ensuring end-to-end security in an IoT environment.

     Because the idea of networking appliances and other everyday objects is relatively new, security has not always been considered a top priority during the design phase of a product.

     Moreover, as IoT is a nascent market, many product designers and manufacturers are more interested in getting their products to market quicker than in taking the necessary steps to enhance security from the outset.

      A major problem cited with IoT security is the use of hard-coded passwords: leaving unencrypted (plain-text) passwords and other secret data (such as private keys) in the source code.

     Devices shipped with hard-coded or default passwords frequently end up causing security breaches.

     Even if passwords are changed, they are often not strong enough to prevent infiltration.
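     As a sketch of the difference, compare a hard-coded password with a safer pattern where only a salted hash is stored and the secret comes from the environment. The names here (DEVICE_PASSWORD and so on) are invented for the example, not a real device API.

    import hashlib, hmac, os

    # Bad: anyone who extracts the firmware or source can read this string.
    HARDCODED_PASSWORD = "admin123"

    # Better: no plain-text secret in the source; store only a salted hash.
    SALT = b"per-device-salt"      # in practice, random and unique per device
    STORED_HASH = hashlib.pbkdf2_hmac("sha256", b"correct horse", SALT, 100_000)

    def check(password: str) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)
        return hmac.compare_digest(candidate, STORED_HASH)   # constant-time compare

    # The secret is supplied at runtime, e.g. via an environment variable.
    print(check(os.environ.get("DEVICE_PASSWORD", "")))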

     Another common problem with IoT devices is that they are often resource-limited and do not contain the computing resources needed to implement enhanced security.

     IoT security also suffers from a lack of industry-accepted standards. Although there are many IoT security frameworks, there is no single agreed framework. 

     Large companies and industry organizations may have their own specific standards, while some segments, such as the industrial IoT, have proprietary and incompatible standards from industry leaders.

     The variety of these standards makes it difficult not only to secure systems but also to ensure their interoperability.

     The convergence of IT and operational technology (OT) networks has created a number of challenges for security teams, particularly those responsible for system protection and end-to-end security in areas outside their area of expertise.
  
           Integrating security into design

      IoT developers should include security from the very beginning of any consumer, professional or industrial device development. Enabling security by default is essential, as is providing up-to-date operating systems and using secure hardware.
           Internet security

     Protecting an IoT network includes ensuring port security, disabling port forwarding and never opening ports when not needed, using anti-malware, firewalls and intrusion detection/intrusion prevention systems, blocking unauthorized IP addresses, and ensuring systems are patched and up to date.

           Consumer education

       Consumers must be informed about the dangers of IoT systems and the measures they can take to stay safe, such as changing default credentials and applying software updates.

     Consumers can also play a role by pushing device manufacturers to create secure devices and refusing to use those that do not meet high security standards.


Big Data: What is Big Data?

          What is Big Data?

[Image: serveur-big-data]

      Big Data is often defined as a means of handling a massive volume of data, hence the term BIG DATA.

     The problem with this definition is that it overlooks a fundamental aspect of Big Data: Big Data is indeed about dealing with large volumes of data, but its main challenge is to extract value from this data, whatever its volume.

     Technological transformations necessary to enhance this data

      Today, companies are facing an exponential increase in data. To give you a more precise idea, you should know that this mass of data can reach up to several petabytes of data and that this data is of various natures.

     For example, we can have data from logs, social networks, e-commerce transactions, data analysis, the Internet of Things, images, audio, video, etc.

     Of course, many companies want to take advantage of this data (whether it is data they have collected themselves or public data), such as data from the web or Open Data.

     Traditional data processing technologies, such as business intelligence tools or databases, were not designed to handle such a volume of data. Extracting value from this data will only be possible for a company by going beyond the limits faced by traditional information systems. These limits are five in number: the 5 Vs.

           What are the 5 Vs

      The first V corresponds to Volume: the explosion of data volumes that must be processed and analyzed. It is this aspect that has mainly been talked about so far.

     The second, Variety, is the difficulty of storing, interpreting and efficiently cross-referencing these increasingly diverse and multiple data sources.

     The third, Velocity, is the speed at which data is generated, captured and shared.

     Not only consumers, but also businesses are generating more and more data, and all this in much shorter timescales.

     However, there is still a time lag between the processing and analysis of this data and the speed at which it is generated, and companies can only capitalize on this data if it is collected and shared in real-time.

     The fourth, Value, is about monetizing a company's data, but also measuring the return on investment of implementing Big Data.

     Finally, the fifth, Veracity, is the ability to have reliable data available for processing; according to this confidence criterion, the data will be given more or less weight.

     For example, among the data that is difficult to trust is data from social networks, whose source and objectivity are hard to assess.

     And it is in the face of these constraints that Big Data will be able to propose a set of technologies that will make it possible to overcome these five limits at once.
   
[Image: marketing]

      The processing and valorization of this data is then done through a Big Data architecture: a platform that collects the company's data. This data is often stored in a data lake (a universal data warehouse), and is then analyzed and monetized.

     This is the purpose of Big Data: it shifts a company's focus to data, and especially to the value the data will generate for the company. Hadoop is a free, open-source framework created by the Apache Software Foundation. The principle is that it takes files, splits them into large blocks, and distributes them across a cluster of machines for processing.

     In terms of scale, we are talking about data volumes of several petabytes, on clusters of several thousand machines. Put that way, it is rather impressive, and that is why Hadoop allows the creation of distributed and scalable applications, which fits the needs of Big Data well.

     But the main reason for Hadoop's success is not technical; it is economic. Before, processing a huge volume of data required supercomputers and specialized hardware. Hadoop has made it possible to perform computations on a petabyte of data on standard servers, in a reliable and distributed way, and therefore at a lower cost.
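     To give an idea of the programming model, here is the classic word-count sketch for Hadoop Streaming, which lets you write the map and reduce steps as plain Python scripts reading stdin and writing stdout. This is a minimal illustration, not a production job.

    # mapper.py - emit "word<TAB>1" for every word on stdin
    import sys
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

    # reducer.py - Hadoop Streaming sorts mapper output by key, so identical
    # words arrive consecutively and can be summed in a single pass
    import sys
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

     Hadoop then splits the input files into blocks, runs mapper.py on the machines holding each block, and feeds the sorted results to reducer.py (the exact hadoop-streaming invocation depends on your installation).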

          What is NoSQL

      NoSQL databases, for "Not Only SQL", refer to a family of database management systems that move away from the classic model of relational SQL databases. NoSQL databases have a simpler and more flexible architecture than traditional relational SQL databases.

     NoSQL solutions allow a database to be spread across a large number of machines, resulting in a distributed database that can dynamically balance the load.

     In the end, NoSQL databases offer high data-processing performance, a scalable architecture, and the ability to handle the variety of data, which corresponds exactly to the needs of Big Data; that is why NoSQL is so popular.
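     As a small sketch of the flexibility involved, here is what storing and querying documents looks like with MongoDB, a popular document-oriented NoSQL database, via the pymongo driver. The server address and the collection names are assumptions for the example.

    # Documents in the same collection need not share a schema.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # assumed local server
    products = client["shop"]["products"]

    products.insert_one({"name": "drone", "price": 499, "tags": ["iot", "uav"]})
    products.insert_one({"name": "watch", "price": 199, "colors": 6})

    # Query by criteria; MongoDB spreads such collections across machines
    # via sharding, which is how the load gets balanced dynamically.
    for doc in products.find({"price": {"$lt": 300}}):
        print(doc["name"])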


Augmented Reality Glasses in 2020 by Apple

Augmented Reality Glasses and Mac ARM at Apple 2020

[Image: glasse-Augmanted-reality]

      According to a report released Monday morning, Apple plans to launch its long-awaited augmented reality (AR) smart glasses, as well as ARM-based Macs, in 2020.

     Fuelling an impressive number of rumors, Apple's potential augmented reality (AR) glasses are expected to be released in 2020.

     It's all set to happen next year, according to Mark Gurman, an Apple journalist at Bloomberg, a prediction that echoes the one made last March by analyst Ming-Chi Kuo.

     After several quiet cycles, 2020 looks like a good year for consumer gadgets. As fifth-generation wireless networks gradually proliferate, Big Tech is preparing devices capable of taking advantage of their faster speeds.

Augmented Reality gets stronger with new consumer gadgets

     The coming year will be crucial for Apple Inc., with the deployment of some of its most impressive hardware: the first major upgrade of the iPhone is expected in 2020, including support for 5G, a much stronger processor and a rear-facing 3D camera.

     The latter will give the phone a better idea of its position in physical space, improving the accuracy of object placement in augmented reality applications, which superimpose virtual images on the real world. This could help users model, for example, the location of images on their walls.

     Such applications are at the heart of Apple's long-awaited AR glasses, which are expected to feature holographic displays. Apple has targeted 2020 for the release of its AR headset, an attempt to succeed where Google Glass failed years ago.

     The glasses are supposed to sync with the wearer's iPhone to display items such as text, email, maps and games in the user's field of vision.

     The company has considered including an App Store for augmented reality apps with the headset, as it did with the Apple TV and Apple Watch.

     Experts in graphics and game development are being called in to establish the glasses as a leader in a new product category and, if all goes well, a potential successor to the iPhone.

     Among the hypotheses put forward is the possibility of pairing the glasses with an iPhone so that notifications are displayed superimposed on the wearer's view, presumably as holographic tiles.

     Following a route calculated from Maps (or Google Maps) could also be part of the information that could be used by the glasses. Of course, games specially developed for the glasses would also be on the menu.

Apple, Facebook and Amazon: who will win the battle for AR?

     However, considering the stakes that AR represents for Apple, Cupertino could very well take even more time to perfect this product.

     As a reminder, other tech giants such as Samsung, Amazon, and Facebook have openly embarked on the reconquest of the glasses 2.0 market, a path opened by Google and its Glass.

     Google Glass is now devoted entirely to the professional world, owing to its lack of success with the general public, a lack of interest that can be explained by an unconvincing range of applications and unconvinced consumers.

     A lack of fervor that is mainly explained by the exorbitant price of the Glass.

iPhone 11 (2019): release date, price, datasheet, everything you need to know

          iPhone 11: what's new

[Image: iphone11-couleur]

        The iPhone 11 (2019) takes up the same foundations as its predecessor, the iPhone XR, launched last year, with the same positioning: the very colorful aluminum case, in six colors this year, with a glass back for wireless charging, and on the front an almost edge-to-edge LCD screen with bezels a little thicker than on the high-end iPhones, but it is nevertheless very nice.

     The same screen and the same case mean the same ergonomics: it fits rather well in the hand and is a good compromise in terms of size, with its 6.1-inch diagonal. Externally, then, few changes are to be noted; there is nevertheless a huge one, and that is battery life.

     This is the first time an iPhone will rank among the best smartphones in this area. The iPhone 11 will indeed have not one hour more battery life than the XR, as promised by Apple, but more than two hours, so it should be possible to get through the day without too many problems.

   
[Image: Apple-a13-processeur]

       Apple achieved this tour de force in several ways: it increased the size of the battery, introduced a new dynamic battery management system, and above all relies on its new A13 Bionic chip to work better while consuming less.

     The A13 Bionic, with its 8.5 billion transistors, its six CPU cores (two high-performance and four low-power) and its four-core GPU, manages to consume up to 30% less power for the same task, a real feat, which explains the battery life mentioned above.

     It is not only more economical; it is also much more powerful than the A12 Bionic, which equipped last year's iPhones. In CPU tests, the power gain oscillates between +20% and +40%; for the graphics part, results are up 40% to 90%, depending on the test and what is being tested.

     But this power gain brought by the A13 Bionic is not limited to the CPU and GPU: it also shows in a much faster Face ID, and in the field of photography.
 
[Image: camera-photo]

       Last year the iPhone XR got only one camera module with a digital zoom that wasn't necessarily great; this year the iPhone 11 gets two camera modules, a wide-angle and an ultra-wide-angle, equivalent to 26 mm and 13 mm respectively.

     One slightly regrets the absence of a telephoto lens (a 52 mm equivalent), but something had to be left to the Pro iPhones.

     In introducing a second camera module in the iPhone 11, Apple promised a seamless transition between the two lenses when you shoot, as well as light and color processing that would be very consistent. In practice, the result is almost perfect: there are sometimes small details that change, with some colors a bit more saturated when you switch lenses, but to the eye it is frankly a very good result.

     On the ultra-wide-angle, an obvious question arises: is there distortion? The answer is yes, but in a very controlled way; straight lines will sometimes appear slightly curved, and there could have been a little more digital correction on the software side. Nevertheless, the result is rather convincing.

     The iPhone 11 also gets a big new photo feature: night mode. Apple, faithful to its way of doing things, has automated it, so there is no question to ask yourself: it triggers automatically when you need it, and you can of course disable it.

     As for video, you can now shoot in 4K at 60 frames per second. There is also a gain in brightness, but what impresses most is the video stabilization, which is really incredible.

     Like the iPhone XR, the iPhone 11 is positioned as the candidate for compromise: a compromise between an acceptable, or at least more affordable, price for an iPhone, performance equal to the high-end models, and ergonomics that are quite good with this 6.1-inch screen.

     Nevertheless, we would advise you not to go for the entry-level model at 809 euros but to spend 50 euros more, at 859 euros, to get 128 GB of storage, because 64 GB in 2019 is very little.

     Technical specifications:

Screen size:                 6.1 inches

Camera:                      12 MP with 2 sensors

Definition:                  1792 x 828 pixels

Processor:                   Apple A13 Bionic

RAM:                         4 GB

Battery:                     3110 mAh

Expandable storage:          None

Memory:                      64 GB, 128 GB, 256 GB

Deep Web, The Dark Side Of The Internet

Deep Web, The Dark Side Of The Internet

[Image: deep-web-dark-web]

       The deep web, also called the invisible web, is, in the architecture of the web, the part of the Net that is not indexed by the major known search engines.

          Understanding the Web and the Internet 

     A quick definition of these two concepts is necessary before getting to the heart of the matter. The Internet is a network of computer networks, made up of millions of both public and private networks.

     Information is transmitted over it using data transfer protocols, such as HTTP or HTTPS for the Web, which enable a variety of services: e-mail, peer-to-peer exchanges, or the World Wide Web, more commonly known as the Web.

     In other words, the Web is one application among many that use the Internet and its millions of networks as a physical medium and means of transport, just like e-mail. 

     It is an information network made up of billions of documents scattered on millions of servers around the world and linked to each other according to the principle of hypertext.

     The web is often compared to a spider's web because the hypertext links linking documents together could be likened to the threads of a web and the documents to the nodes where these threads intersect.

     And the web is itself composed of two parts: the visible web and the invisible web, more commonly called the Deep Web.

     But to understand what the Deep Web really is, we should first talk about the visible web, indexing robots, the opaque web, and deep resources.

     The visible web, also called the surface web, is the Internet content that can be accessed via classic search engines such as Google, Yahoo, Bing, etc., whatever the browser used (Mozilla Firefox, Internet Explorer, Google Chrome...). It therefore includes all sites and pages indexed and referenced by these search engines.

     For example, when you type "dailytechmonde" on Google, you will find a direct link to a website. 

     In other words, a page indexed on a referenced website. In order to offer you this page, the search engine in question has searched a database it has previously created by indexing all possible web pages.

     It has thus, long before, tried to understand the content of all these pages in order to be able to propose them to the user when he carries out a keyword search. I talk about keywords because that's what we use most of the time with the different search engines.

     To discover new pages and constantly update their databases, search engines use special programs, the famous crawlers: indexing robots that follow hyperlinks.

[Image: bots-google]


      We can also talk about "crawlers" or "bots", which is a simple contraction of the term "robots". Once a website is indexed by these robots, its content can then be found on demand.
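     To see how simple the core idea is, here is a minimal crawler sketch in Python using only the standard library: fetch a page, extract its hyperlinks, and visit them in turn. The seed URL is just an example, and a real crawler would also respect robots.txt, rate limits, and much more.

    # Minimal breadth-first crawler: pages are discovered only by following
    # hyperlinks, which is why unlinked pages stay invisible to engines.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        frontier, visited = [seed], set()
        while frontier and len(visited) < max_pages:
            url = frontier.pop(0)
            if url in visited:
                continue
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except Exception:
                continue
            visited.add(url)                   # "index" the page
            parser = LinkExtractor()
            parser.feed(html)
            frontier += [urljoin(url, link) for link in parser.links]
        return visited

    print(crawl("https://example.com"))        # example seed URL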

     But despite significant material resources, crawlers are not able to follow all the theoretically visible links that the Web contains. 

     In order to study the behavior of crawlers when faced with sites containing a large number of pages, a team of German researchers has, for example, created a website with more than 2 billion pages.

     This website was based on a binary structure and was very deep: it took at least 31 clicks to reach some pages. The researchers left the website online for one year without any changes.

     And the results showed that the number of indexed pages for this site, in the best case, did not exceed 0.0049%. This part of the web, theoretically indexable but not in fact indexed by the engines, is nicknamed the "opaque web"; it sits right between the visible web and the deep web.

    So, the visible web can be indexed and it is the case. The opaque web can be indexed, but it is not.

          The Deep web cannot be indexed

     In order for a website to be indexed by crawlers, then placed in the database by indexing robots and thus be referenced by search engines, it must comply with certain standards.

     These standards concern the format, the content or the accessibility of the robots on the site. Namely that a website can have at the same time pages that do not comply with these standards and pages that comply with them, in which case only the latter will be referenced.

     All websites that are directly accessible via search engines, therefore, respect a minimum of these standards. The referenced pages of all these sites form what is called the visible web: the part of the web that respects these standards. But it would represent only 4% of the total web.

     The remaining 96% is the so-called deep resources: pages that do exist on the web but are not referenced by search engines for many reasons.

     Starting with the failure to comply with established standards, but not only. These deep resources, which would, therefore, represent 96% of the entire web, form what is called the "Deep Web", also called the invisible web, the hidden web or the deep web.

     I say "would" because this ratio varies according to the studies that have been carried out. For example, according to some specialists in 2008, the Deep Web represented only 70% of the Web, or about a trillion non-indexed pages at the time.

     A July 2001 study conducted by the BrightPlanet company estimated that the Deep Web could contain 500 times more resources than the visible web. 

     According to Chris Sherman and Gary Price in their book "The Invisible Web", the visible web represents 3 to 10% of the Web, so 90 to 97% for the Deep Web. According to a Canadian researcher at the end of 2013, it would be more in the order of 10% for the visible web and 90% for the Deep Web.

     And according to a study published in the journal Network, any search on Google would simply provide 0.03% of the information that exists online. So 1 page out of 3,000 existing pages.

     The percentage that stands out most often is still 4% for the visible web and 96% for the Deep Web. Just keep in mind that the visible web is actually only a tiny part of the entire web.

     And that's why the iceberg metaphor is often used as a representation. The emerged part represents the visible web, and the submerged part, the famous deep resources that make up the Deep web.

     Moreover, these resources, in addition to being vast, are often of very good quality, because the files are less compressed. But back to indexing.

    There is a multitude of sites, pages, and documents that classic search engines cannot reference, either because they simply don't have access to these pages, or because they can't understand them.

     There are a multitude of reasons, but if we were to list the main ones, they would be:

* Unrelated content.
* The script content.
* The non-indexable format.
* Content that is too large.
* Private content.
* Limited access content.
* The Internet of Things.
* Dynamic content.
* Content under a non-standard domain name.

     It goes without saying that some websites combine several of these factors. As far as unrelated content is concerned, some site pages are simply not linked to each other by hyperlinks, and therefore cannot be discovered by indexing robots that only follow hyperlinks. This is called pages without backlinks.

     As far as script content is concerned, some web pages contain scripts such as JavaScript or others, which can sometimes block access to the robots, often without meaning to. The use of the JavaScript language, sometimes misunderstood by robots, to link pages together is also a hindrance to their indexing.

     As for the non-indexable format, the Deep Web is also made up of resources using data formats that are incomprehensible to search engines. 

     This has been the case in the past, for example, with the PDF format, or those of Microsoft Office, such as Excel, Word or PowerPoint. The only format initially recognized by robots was the native language of the web, namely HTML. 

     But search engines are gradually improving to index as many formats as possible. Today, they are able to recognize, in addition to HTML, PDF and the Microsoft Office formats and, since 2008, pages in Flash format.

     As far as oversized content is concerned, traditional search engines only index between 5 and 60% of the content of sites accumulating large databases. 

     This is the case, for example, of the National Climatic Data Center with its 370,000 GB of data, or the NASA site with its 220,000 GB. 

     The engines therefore partially index these voluminous pages. Google and Yahoo, for example, stop indexing from 500 KB.

     As for private content, some pages are inaccessible to robots, due to the will of the website administrator. 

     The "robots.txt" file, placed at the root of a site, makes it possible to allow the indexing of only certain pages or documents of the site, and thus to protect copyrighted content.

     For example, you may not want some of the images or photos on your site to appear on Google Images, or you may want to limit visits to protect the site from overly frequent access.
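     For illustration, a minimal robots.txt might look like this (the paths are hypothetical): the first block hides a directory from all crawlers, the second keeps Google's image crawler out entirely.

    User-agent: *
    Disallow: /private/
    Disallow: /photos/

    User-agent: Googlebot-Image
    Disallow: /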

[Image: Google-Search-Console]


     But it is not uncommon for a robots.txt placed at the root of a website to completely block the indexing and SEO of the entire site. Indeed, some people deliberately choose not to reference their site in order to keep its information private.

     The only way to access their page is therefore to know the URL of their page in its entirety. The site developer can then choose to distribute the address to a few people in a specific community, for example on a forum like Reddit or 4chan, and these people can then circulate it by word of mouth. This is exactly the same operation as Discord servers, for example.

     This is what is more commonly known as the private web, which is a category related to the Deep Web, and which is quite similar to the Dark Net.

     As far as limited access content is concerned, some websites require authentication with a login and a password to access the content. This is more commonly known as the proprietary web.

     This is the case, for example, of some sub-forums, or some sites with paid archives, such as online newspapers, which sometimes require a subscription. Some sites also require you to fill out a captcha, or Turing test, to prove that you are human and thus access the content.

     Still, other sites sometimes require you to fill in a search criteria form to be able to access a specific page. This is the case, for example, of sites that use databases.

     As far as the Internet of Things (IoT) is concerned, it is the grouping, or rather the network, of all connected physical objects that have their own digital identity and are capable of communicating with each other.

     From a technical point of view, the IoT relies on the direct digital identification of each of these objects, thanks to a wireless communication system, which can be either Wi-Fi or Bluetooth.

     Some of these objects have a URL, and even speak HTTP, but they are not indexed by traditional search engines, because on the one hand it would be useless, and on the other hand it could lead to certain abuses.

     But some specialized search engines, such as Shodan, ignore these concerns and allow you to do much more in-depth searches, especially into the Internet of Things.

     You can then stumble upon specialized pages for connecting to connected objects. For example, with real-time vehicle tracking, or even unprotected video devices. It can just as easily be surveillance cameras, such as private webcams that do not require a password for access.

     So you understand the problems that can arise. I take this opportunity to advise you to always unplug your webcam when you are not using it. 

     And if it is built into your laptop, at least put something over it to hide the camera. Even then, the microphone of your webcam will still be operational; don't forget that. That's why it's always better to unplug it when you can, rather than just hiding the lens.

     As far as dynamic content is concerned, websites contain more and more dynamic pages. However, in this case, the navigation hyperlinks are generated on demand and differ from one visit to another.

     Basically, the content of the pages fluctuates according to several parameters and the links change according to each user, thus preventing indexing.

     For example, let's say you want to buy a ticket from Paris to Marseille. You type SNCF into Google, go to the site, then to the search page, and enter your information in a form: the names of the cities, your travel class, your age group, days, times, etc.

     Once confirmed, you then arrive on a well-defined SNCF page, generated thanks to filters in its database, following the information you have provided.

     This page, which shows you very specific train timetables with available fares, cannot be found directly by doing a Google keyword search, we agree.

     It is, therefore, a page that is not indexed by any search engine. I imagine that all of you have already done this kind of SNCF search at least once. Well, congratulations! You were on the deep web at the time.
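     To make this concrete, such a dynamically generated page typically lives at a long URL full of query parameters, along these lines (a purely made-up example):

    https://www.example-trains.com/results?from=Paris&to=Marseille&date=2019-12-01&class=2

     No search engine has this exact page in its index, because it only exists once the form has been submitted.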

     Finally, as far as content under a non-standard domain name is concerned, these are websites whose domain name's DNS resolution is not standard, for example with a root that is not registered with ICANN, the Internet Corporation for Assigned Names and Numbers, the body responsible for assigning domain names and numbers on the Internet.

     The domain name roots known to ICANN are .com, .fr, .co, .gov and many others, depending on the country. But there are non-standard domain names that are only accessible via specific DNS servers.

     The Domain Name System (DNS) is the service that translates a domain name into several types of information associated with it, in particular the IP address of the machine bearing that name.
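     For a standard domain, this translation is one call away in Python; a .onion name would fail here, precisely because it cannot be resolved by ordinary DNS servers.

    # Ordinary DNS resolution via the operating system's resolver.
    import socket
    print(socket.gethostbyname("example.com"))   # prints an IP, e.g. "93.184.216.34"

    # socket.gethostbyname("xyz123.onion") would raise socket.gaierror.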

[Image: tor-onion]

     The best known and most interesting example is the .onion root, which can only be resolved via the Tor Browser on the Tor network. This is the famous Dark Net, which provides access to much of the less accessible side of the Deep Web, the Dark Web.

     In any case, you just have to understand that there are many, many cases where traditional search engines are unable to list a site or at least some of its pages.

     All these pages that are inaccessible, at least directly, via search engines are therefore called deep resources, and together they form what is called the Deep Web.

     The average user, therefore, navigates every day on a minor part of the Web, the visible Web. From time to time, he or she may surf the Deep Web without realizing it, as with the example of the SNCF reservation.

     I took this example, but there are plenty of other cases where you are surfing the Deep Web.

For example, when you check your emails on your Gmail, you are on the Deep Web.

     When you consult your customer area on your telephone operator's website, you are on the Deep Web.

     When you're viewing a shared document on Google Drive, you're on the Deep Web.

     If you're in a company that has an internal network, often called the intranet, and you go there, you're on the Deep Web.

     When you talk to your friends on a Discord server, you are on the Deep Web.

     When you check your bank accounts online, you are on the Deep Web.

     The Deep Web is your mailbox, your administration spaces, your company's internal network, dynamic web pages and a lot of other things.

     And the Deep Web is likely to become a much larger part of the web in the years to come, as the Cloud becomes more and more important.

     All the articles and reports that say that you only surf the visible web every day are therefore wrong. Of course, the visible web is surely the one you use the most. But I imagine, for example, that you check your email every day, so you go to the Deep Web every day.

     The Deep Web has nothing good or bad as some might think. It's just a technical specificity. There is no dark side of the Net, just areas ignored by some engines. 

     The problem, as you will have understood, is that a lot of articles and reports confuse the Deep Web and the Dark Web. They talk about the Dark Web by calling it the Deep Web, but it's not the same thing.

     As a result, the Deep Web is wrongly demonized by the media and the general public gets a completely biased image of it.

          The difference between the Deep Web and the Dark Web

     When I listed the main reasons why some web pages are not indexed, I mentioned those with a non-standard domain name. In other words, URLs that do not end in .com, .fr, .co, .gov and so on, depending on the country.

     These sites are not referenced by the classic search engines because their domain name is not registered with ICANN. The majority of them were created deliberately to avoid any referencing, and their URLs can only be resolved, so to speak, via specific DNS servers.

     The best-known example is the .onion root, which can only be resolved via the Dark Net Tor, allowing access to much of the least accessible side of the Deep Web, the Dark Web.

     Thus, the so-called Dark Web is a sub-part of the Deep Web and is the set of pages that can only be accessed by having a direct .onion link to the Dark Net Tor.

     Again, there's nothing good or bad about this. It's just a technical specificity. And why do I also want to differentiate the Dark Net from the Dark Web? Because the Dark Web is about content and the Dark Net is about infrastructure.

     That is, the technical means by which this content is created and made available. Note also that there is not just one Dark Net, but several.

     So let me summarize. The Internet is a network of computer networks, made up of millions of networks, both public and private, which circulate all kinds of data.

     The World Wide Web, or the Web if you prefer, is one application among many that use the Internet as a physical medium and a means of transport to find this data.

     The Web has two distinct parts: the visible web and the invisible web, more commonly known as the Deep Web.

     The Deep Web exists for a number of reasons that we have seen. And one of them concerns special domain names.

     The networks grouping together these sites with these special domain names are called the Dark Nets. And the content that we find on these Dark Nets is called the Dark Web.
