History of neural networks

How did neural networks start?

The first neural network was conceived of by Warren McCulloch and Walter Pitts in 1943. They wrote a seminal paper on how neurons may work and modelled their ideas by creating a simple neural network using electrical circuits.

This breakthrough model paved the way for neural network research in two areas:

  • Biological processes in the brain.
  • The application of neural networks to artificial intelligence (AI).

AI research quickly accelerated, with Kunihiko Fukushima developing the first true multilayered neural network in 1975.

The original goal of the neural network approach was to create a computational system that could solve problems like a human brain. However, over time, researchers shifted their focus to using neural networks to match specific tasks, leading to deviations from a strictly biological approach. Since then, neural networks have supported diverse tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.

As structured and unstructured data sizes grew to big data levels, researchers developed deep learning systems, which are essentially neural networks with many layers. Deep learning makes it possible to capture and mine much larger volumes of data, including unstructured data.

Why are neural networks important?

Neural networks are ideally suited to helping people solve complex problems in real-life situations. They can learn and model relationships between inputs and outputs that are nonlinear and complex; make generalizations and inferences; reveal hidden relationships, patterns, and predictions; and model highly volatile data (such as financial time series) and the variance needed to predict rare events (such as fraud). A minimal sketch of this kind of nonlinear learning appears after the list below. As a result, neural networks can improve decision processes in areas such as:

  • Credit card and Medicare fraud detection.
  • Optimization of logistics for transportation networks.
  • Character and voice recognition, which feed into natural language processing.
  • Medical and disease diagnosis.
  • Targeted marketing.
  • Financial predictions for stock prices, currency, options, futures, bankruptcy and bond ratings.
  • Robotic control systems.
  • Electrical load and energy demand forecasting.
  • Process and quality control.
  • Chemical compound identification.
  • Ecosystem evaluation.
  • Computer vision to interpret raw photos and videos (for example, in medical imaging, robotics, and facial recognition).
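
To make the nonlinear-learning point concrete, here is a small, illustrative training sketch. The article names no library, so PyTorch is an assumption, and the layer sizes and training settings are arbitrary: a two-layer network learns XOR, a mapping that no single linear model can represent.

import torch
import torch.nn as nn

# XOR: the output is 1 only when exactly one input is 1 -- a nonlinear relationship.
inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

# Two layers with a nonlinear activation between them.
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    print(model(inputs).round().flatten())  # typically: tensor([0., 1., 1., 0.])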

Types of neural networks

There are different kinds of deep neural networks, and each has advantages and disadvantages depending on the use case. A minimal code sketch of each type appears after the list. Examples include:

  • Convolutional neural networks (CNNs) contain five types of layers: input, convolution, pooling, fully connected and output. Each layer has a specific purpose, like summarizing, connecting or activating. Convolutional neural networks have popularized image classification and object detection. However, CNNs have also been applied to other areas, such as natural language processing and forecasting.
  • Recurrent neural networks (RNNs) use sequential information, such as time-stamped data from a sensor device or a spoken sentence composed of a sequence of terms. Unlike in traditional neural networks, the inputs to a recurrent neural network are not independent of one another; the output for each element depends on the computations performed for the elements that precede it. RNNs are used in forecasting and time series applications, sentiment analysis and other text applications.
  • Feedforward neural networks connect each perceptron in one layer to every perceptron in the next layer. Information is fed forward from one layer to the next in the forward direction only; there are no feedback loops.
  • Autoencoders are used to create compressed abstractions (encodings) of a given set of inputs. Although similar to more traditional neural networks, autoencoders seek to model the inputs themselves, so the method is considered unsupervised. The premise of autoencoders is to desensitize the irrelevant and sensitize the relevant. As layers are added, further abstractions are formulated at higher layers (the layers closest to the point at which a decoder is introduced). These abstractions can then be used by linear or nonlinear classifiers.
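
The article ties none of these descriptions to a particular library, but as a rough, illustrative sketch (PyTorch assumed; the layer sizes are arbitrary), the four types above might look like this:

import torch
import torch.nn as nn

# Feedforward network: every unit in one layer connects to every unit in the
# next, information flows forward only, and there are no feedback loops.
feedforward = nn.Sequential(
    nn.Linear(784, 128),  # input layer to hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),   # hidden layer to output layer
)

# Convolutional network: input -> convolution -> pooling -> fully connected -> output.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer summarizes regions
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # fully connected to output
)

# Recurrent network: each step's output depends on the steps before it, so
# sequential inputs (time series, sentences) are not treated as independent.
rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)

# Autoencoder: an encoder compresses the input into an abstraction and a
# decoder reconstructs it; the model is trained on the inputs themselves.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Quick shape check with dummy data: four 28x28 grayscale images and a
# batch of four 5-step sequences with 8 features per step.
images = torch.randn(4, 1, 28, 28)
sequences = torch.randn(4, 5, 8)
print(feedforward(images.flatten(1)).shape)    # torch.Size([4, 10])
print(cnn(images).shape)                       # torch.Size([4, 10])
print(rnn(sequences)[0].shape)                 # torch.Size([4, 5, 32])
print(Autoencoder()(images.flatten(1)).shape)  # torch.Size([4, 784])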

Neural networks can also identify anomalies. In the future, they could give doctors a second opinion, for example on whether a growth is cancerous or what an unknown condition might be, and those second opinions could be delivered faster and with greater accuracy.

How Pool Angel’s artificial intelligence keeps watch

The artificial intelligence uses frame-by-frame comparison to detect an object that was not previously there, or the disappearance of an object from the field of view. It learns what ‘normal’ looks like and spots differences. In ‘Supervised’ mode, alerts for objects such as pool toys or outdoor furniture being moved are suppressed to avoid false alarms. When you leave the pool area, a new ‘normal’ is established.
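
The model behind this behaviour is not public, so the following is only a rough sketch of the frame-comparison idea using OpenCV background subtraction; the video source, grace thresholds and minimum area are made-up placeholders.

import cv2

# The background subtractor gradually learns what 'normal' looks like
# (a stand-in for the product's learned model of the scene).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

cap = cv2.VideoCapture("pool_camera.mp4")  # hypothetical video source
MIN_AREA = 1500  # made-up area threshold: ignore tiny changes such as ripples or leaves

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Pixels that differ from the learned background are marked in the mask.
    mask = subtractor.apply(frame)

    # Group changed pixels into contours; a large contour means something new
    # has appeared (or something that was there is now missing).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > MIN_AREA:
            x, y, w, h = cv2.boundingRect(contour)
            print(f"Change detected at x={x}, y={y}, size={w}x{h}")

cap.release()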

By using a combination of two cameras, one to identify individuals as they enter the designated area and the other to monitor the whole area, the artificial intelligence can keep track of an identified individual for as long as they remain within the field of view.  Both cameras are connected to the same processor so the first can pass the identity to the second, allowing the second to continue showing the identity of the individual even when their face is not visible to the camera.

In case you are concerned about privacy, be assured that nobody sees the feed from your camera unless an emergency is detected and not acknowledged locally.  Instead, the artificial intelligence identifies key points on the human body such as shoulders, elbows, wrists, hips, knees and ankles.  It uses the relative position of these key points to determine the pose of the body and has been trained to recognise poses that indicate danger. 
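
As a toy illustration of reasoning over relative keypoint positions (the rule and keypoint names below are invented for the example; the product relies on a trained model rather than a hand-written rule):

from dataclasses import dataclass
from typing import Dict

@dataclass
class Keypoint:
    x: float  # pixel coordinates; y grows downward in image space
    y: float

# Invented rule for illustration: if the shoulders sit lower in the frame than
# the hips, the body may be inverted or slumped, so we flag the pose.
def looks_dangerous(keypoints: Dict[str, Keypoint]) -> bool:
    shoulder_y = (keypoints["left_shoulder"].y + keypoints["right_shoulder"].y) / 2
    hip_y = (keypoints["left_hip"].y + keypoints["right_hip"].y) / 2
    return shoulder_y > hip_y  # shoulders below hips in the image

pose = {
    "left_shoulder": Keypoint(210, 420), "right_shoulder": Keypoint(250, 430),
    "left_hip": Keypoint(215, 300), "right_hip": Keypoint(245, 305),
}
print(looks_dangerous(pose))  # True: the shoulders are below the hips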

‘Supervised’ mode is designed for use when swimming is planned, with a responsible adult present.  It won’t bother you with constant alerts as people enter the area but the system will still raise the alarm if someone disappears underwater for longer than you have deemed acceptable. 

We strongly encourage use of a Pool Angel lanyard during such sessions so that there is no doubt over who has assumed responsibility for keeping watch over children in the pool.  Child drownings can happen even with multiple adults present if they all assume that someone else is paying attention.  Pool Angel offers you an added layer of protection; by comparing the number of people detected frame by frame, the artificial intelligence can spot when someone is missing and raise the alarm.    
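
Illustrative only (the grace period and input format are placeholders): the core of that comparison could be as simple as tracking the largest head count recently seen and alarming when the count stays below it for too long.

GRACE_SECONDS = 10  # made-up grace period before a missing person triggers an alarm

def monitor(person_counts):
    """person_counts yields (timestamp_seconds, people_detected) per frame.
    This toy version ignores people who leave the area legitimately."""
    expected = 0
    missing_since = None
    for timestamp, count in person_counts:
        if count >= expected:
            expected = count           # everyone accounted for; update the baseline
            missing_since = None
        elif missing_since is None:
            missing_since = timestamp  # someone has just dropped out of view
        elif timestamp - missing_since > GRACE_SECONDS:
            print("ALARM: fewer people visible than expected")
            missing_since = None

# Example: three people are visible at first, then only two from t=5 onward.
frames = [(t, 3) for t in range(0, 5)] + [(t, 2) for t in range(5, 20)]
monitor(frames)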

Because the artificial intelligence can learn from experience, it can learn to tell the difference between your pet and the local wildlife that might encroach on your pool area. This means you can keep your pets safe without being disturbed by false alarms at night, although you might be intrigued to view clips of your nocturnal visitors in the morning. A short video clip is stored each time something is detected.

If an adult is detected in the pool area the system will alert you and prompt you to switch to ‘Supervised’ mode if you haven’t already done so.  This mode is designed for planned use of the pool and will suppress alerts to entry and exit from the pool area.  When the last adult leaves the area the system detects that too and prompts you to switch back to keeping watch over the empty pool.  An emergency alarm is raised if the departure of that adult leaves an unsupervised child in the pool area.
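
A toy sketch of that switching logic (the function name and rules are invented for illustration; the real system is driven by the camera detections described above):

def update_mode(mode, adults_present, children_present):
    """Toy switching rules based on the description above, not the product's real logic."""
    if mode == "Watch" and adults_present > 0:
        print("Prompt: an adult is in the pool area; switch to Supervised mode?")
        return "Supervised"
    if mode == "Supervised" and adults_present == 0:
        if children_present > 0:
            print("EMERGENCY: an unsupervised child is left in the pool area")
        print("Prompt: the area is no longer supervised; switch back to watching the empty pool?")
        return "Watch"
    return mode

mode = "Watch"
mode = update_mode(mode, adults_present=1, children_present=1)  # prompts, then Supervised
mode = update_mode(mode, adults_present=0, children_present=1)  # emergency alarm, then Watch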

Although we refer to the boundary around a swimming pool, the camera can be used to keep watch over any boundary you designate: a trampoline, a climbing frame, the tool shed, or any other area that could present a danger to unsupervised children. By comparing what was present in a previous frame with what is currently in frame, the artificial intelligence can detect the arrival of something or someone new in the designated area.