
admin · July 27, 2018

Ways of committing crimes have evolved drastically over the years, from cavemen wielding sticks and stones to modern criminals using devious tactics to steal, kill, and dupe other people. As the world advanced, people also found ingenious ways, and developed sophisticated technology, to protect their property and their well-being in general.

At present, most gadgets employ artificial intelligence software and various security technologies such as biometrics. Some devices use fingerprint scanners, others scan your entire face, and still others scan your iris.

As people got smarter about setting up safeguards, thieves adapted and learned to circumvent these safety measures. In theory, an iris scanner can be duped with an eyeball plucked from another person. Those who have read Angels and Demons by Dan Brown will probably remember that the antimatter was stolen because the thief cut out a physicist's eye and used it to fool the biometric scanners. That is, indeed, a pretty disturbing thought.

So the most important question to ask is this: "Is there any way for an iris scanner to confirm that a scanned eyeball belongs to a living person?" Recently, a group of researchers based in Poland studied whether a machine-learning system could tell living eyeballs from dead ones.


What Exactly Is Iris Recognition?

An iris scanner relies on iris recognition: an automated method of biometric identification that applies mathematical pattern-recognition techniques to video images of one or both irises. The iris's complex patterns are distinctive and stable, and they can be seen from a distance.

Some people confuse iris recognition with retinal scanning. Retinal scanning is an ocular-based biometric technology that uses the distinct patterns of the blood vessels on an individual's retina. Iris recognition, on the other hand, uses video-camera technology with subtle near-infrared illumination to capture images of the intricate, detailed structures of the iris that are visible externally.


How Dead Eyeballs Can Be Distinguished from Living Ones

Thanks to Mateusz Trokielewicz at Warsaw University of Technology in Poland and a couple of his colleagues, we may have an answer to that glaring question. The researchers created a database of irises scanned from both living people and dead bodies, then trained a machine-learning algorithm to recognize the difference.

Trokielewicz says their algorithm can differentiate a living iris from a dead one with 99% accuracy. Even at that level of accuracy, though, is it still possible to beat the detection system?


Efforts in Fool-proofing the System

The system was stress-tested by means of an unusual database: the Warsaw BioBase PostMortem Iris dataset, which contains 574 near-infrared iris images gathered from 17 people, captured between five hours and 34 days after death.

The researchers also gathered 256 images of live irises. To avoid technical hitches, they were careful to use the same iris camera that had been used on the cadavers, and they cropped the images so that only the iris was visible, guarding their results against confounding cues.
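The paper's exact model isn't described here, but the general recipe, training a binary image classifier on cropped live and post-mortem iris images, can be sketched in a few lines. Below is a minimal, hypothetical sketch using scikit-learn: the folder names, image size and format, and the choice of a support-vector classifier are all illustrative assumptions, not the researchers' actual pipeline.

```python
# Minimal sketch of a live-vs-dead iris classifier (illustration only;
# not the Warsaw team's actual pipeline). Assumes cropped, same-format
# near-infrared iris images stored in two folders: live/ and postmortem/.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def load_images(folder, label, size=(128, 128)):
    """Load grayscale iris crops and flatten them into feature vectors."""
    X, y = [], []
    for path in Path(folder).glob("*.png"):
        img = Image.open(path).convert("L").resize(size)
        X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        y.append(label)
    return X, y

X_live, y_live = load_images("live", 0)
X_dead, y_dead = load_images("postmortem", 1)
X = np.array(X_live + X_dead)
y = np.array(y_live + y_dead)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Train a simple support-vector classifier and report held-out accuracy.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```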


Even though Trokielewicz and his colleagues achieved very good results, there is still one drawback: the system's accuracy does not extend to fresh irises. It can only produce a valid result for irises from bodies that have been dead for 16 hours or more.

Certainly, this gives thieves and criminals a window of opportunity for their wicked deeds, but it is also some comfort to know that a plucked eyeball will eventually lose its ability to dupe the system after a matter of hours.


admin · July 25, 2018

Not too long ago, at this year's Google I/O, Google showed Google Duplex to the public. Enabling people to have a smooth, natural conversation with computers has been a long-standing objective of human-computer interaction.

In recent years there has been tremendous progress in computers' ability to understand and generate natural speech; even so, it can be quite frustrating to converse with the computerized voices in your gadgets, which simply don't understand natural language.

Most systems and software evidently struggle to comprehend simple words and instructions; they don't sustain a smooth conversational flow, and they compel people to adjust to the system rather than the other way around. Sometimes software such as a virtual assistant hears something completely different from what was actually said. All of that may change because of Google Duplex.


Google Duplex: What It Is, Exactly

Google Duplex is a fully automated system capable of making calls on the user's behalf, doing so with a natural-sounding human voice, complete with fillers such as "um" and "uh", rather than the robotic voice we are accustomed to. In addition, Google Duplex can comprehend complex sentences, slang, rapid speech, and long dialogues.
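Google hasn't published Duplex's internals, but the effect of those fillers is easy to picture. The toy sketch below, with made-up probabilities and filler words, shows the general idea of sprinkling disfluencies into a response before handing it to a text-to-speech engine; it is an illustration, not Google's implementation.

```python
# Toy illustration of inserting speech disfluencies ("um", "uh") into a
# response before text-to-speech. The probabilities and filler list are
# made up for illustration; this is not Google's implementation.
import random

FILLERS = ["um", "uh", "mm-hmm"]

def add_disfluencies(text, p=0.15, seed=None):
    """Randomly insert a filler word before some words of the response."""
    rng = random.Random(seed)
    words = []
    for word in text.split():
        if rng.random() < p:
            words.append(rng.choice(FILLERS) + ",")
        words.append(word)
    return " ".join(words)

print(add_disfluencies("I would like to book a table for two on Tuesday at 6 pm.", seed=42))
```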

These characteristics make Google Duplex an apt answer to the long-running struggles of human-computer interaction. By letting users converse naturally with a computer, it spares them from repeating their commands and assures them that the computer understood everything they said.

Gone are the days when you adjust to the system; gone are the days when you shout at your phone just to get your message or command through; gone are the days when you repeat a command because the system keeps mishearing it. Now users can speak normally, just as they would to another human being, without fear of being misunderstood.


Google Duplex: What It Can Do for You

At present, Google Duplex can be used to complete three separate tasks: making reservations at restaurants, scheduling hair appointments, and getting businesses’ holiday operating hours.

Additionally, if the business you are trying to book with accepts online reservations, Google Duplex will use that. If not, it will call the business on your behalf. All you have to do is instruct Google Assistant, for instance by saying "Call ABC restaurant and make a reservation for two people on Tuesday night at 6", or something to that effect.

As soon as an appointment or reservation is made, Google Duplex notifies the user that the arrangements are in place.

It is, of course, a bit creepy to carry on a conversation without knowing whether you're talking to a computer or a genuine human being. Perhaps in answer to that growing concern, Google later announced that a call from Google Duplex would identify itself as such before starting a conversation.


Google Duplex: Potential and Practical Use

I think we've all experienced calling the telephone or cable company and listening to an automated system tell us to "press 1 to talk to a customer service agent or press 2 for accounting". Integrating Google Duplex into those systems would be a serious improvement.

Another potential use of Google Duplex is in government offices; people would no longer have to wait an hour or two to reach a representative, since they could put their questions to the AI system directly.

There is, without a doubt, still room for improvement, but Google Duplex is clearly a step in the right direction toward significantly improving our day-to-day interaction with computers.


admin · July 21, 2018

Over the last few decades, technology has been evolving rapidly. It started with the rise of the Internet and smartphones; then other smart gadgets and technologies began to circulate in the market as well.

At present, breakthrough technology is dramatically changing how things work across industries, in countless and unexpected ways. For instance, modern technology makes it easier for people to arrange accommodation and travel, and that advancement has boosted tourism.

In manufacturing, technology has played a key part in assisting production through the computational power and data availability it provides. It has also helped industries such as farming, healthcare, automotive, music, and transportation.

In short, technology has been an indispensable factor in transforming various industries and helping them thrive in ways they could not before.

Below you will read about some of the hand-picked breakthrough technologies from this year's MIT Technology Review list.


3-D Metal Printing

3-D printing dates back to 1981, but for decades it was used mostly by hobbyists and for design purposes. Now, however, 3-D printing is viewed as a sensible approach to manufacturing metal parts: 3-D printers can make metal objects quickly and, most importantly, cheaply.

As a whole, this technology can produce large, complex metal objects and parts on demand. That could transform manufacturing as we know it and open new doors of possibility and opportunity.

The current key players in 3-D metal printing are Markforged, Desktop Metal, and GE.

Artificial Embryos

This technological breakthrough has given a completely different meaning to how life can be created. Experts at the University of Cambridge have grown realistic mouse embryos using only stem cells. No egg or sperm was required, just stem cells harvested from another embryo.

This breakthrough will certainly open new doors to understanding how life develops. However, it also raises ethical and even philosophical concerns.

The current key players in artificial embryos are the University of Cambridge, the University of Michigan, and Rockefeller University.


Cloud-based AI Services

Artificial intelligence has been making huge splashes across industries, but it has been dominated by a select few companies. Delivered as a cloud-based service, AI can become available to many more organizations and give the economy a significant boost.

The current key players in cloud-based AI services are Amazon, Google, IBM, and Microsoft, all working to broaden access to machine learning and artificial intelligence and to make it relatively affordable and easy to use.
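To make "AI as a cloud service" concrete, here is roughly what calling one such service looks like, using Amazon's Rekognition image-labeling API through the boto3 SDK as an example. The file name and region are placeholders, and configured AWS credentials are assumed.

```python
# Sketch: calling a cloud-hosted vision model instead of training your own.
# Assumes AWS credentials are configured; "photo.jpg" and the region are
# placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:
    response = client.detect_labels(Image={"Bytes": f.read()}, MaxLabels=5)

# Each label comes back with a name and a confidence score.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```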


Babel Fish Earbuds

Since time immemorial, language has been one of the greatest barriers to communication, and it still is. With near-real-time translation, people can hold a conversation in different languages, which is not just effective but also efficient.

The earbuds are reportedly still in their early stages, but the underlying technology is truly promising.
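Reports don't detail the earbuds' internals, but near-real-time translation is typically described as a three-stage pipeline: speech recognition, machine translation, then speech synthesis. The sketch below shows that structure with deliberately hypothetical stub functions standing in for real services; it is not the product's actual code.

```python
# Hypothetical sketch of a near-real-time translation pipeline:
# speech-to-text -> machine translation -> text-to-speech.
# The three stage functions are stubs standing in for real services.

def speech_to_text(audio_chunk: bytes, lang: str) -> str:
    """Stub: send audio to a speech-recognition service."""
    raise NotImplementedError

def translate(text: str, source: str, target: str) -> str:
    """Stub: send text to a machine-translation service."""
    raise NotImplementedError

def text_to_speech(text: str, lang: str) -> bytes:
    """Stub: synthesize audio in the target language."""
    raise NotImplementedError

def interpret(audio_chunk: bytes, source="en", target="de") -> bytes:
    """One pass through the pipeline for a short chunk of speech;
    processing chunk by chunk is what keeps latency near real time."""
    heard = speech_to_text(audio_chunk, source)
    translated = translate(heard, source, target)
    return text_to_speech(translated, target)
```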


These are truly exciting times, as technology advances at blinding speed. It is as hard to imagine what the next 20 years will bring as it was, 20 years ago, to imagine what today would be like. What is truly extraordinary about today's breakthroughs is that they spread rapidly into different industries, no longer limited to just a few, the way they were 20 years ago.


admin · June 6, 2018

Nowadays, artificial intelligence (AI) is used in a variety of ways, most commonly to improve computer programs and technologies. You might remember the popular AI conversation program SimSimi from back in 2012. For those unfamiliar with it, SimSimi was created by ISMaker and has applications for Android, Windows Phone, and iOS.

Another application of artificial intelligence is in programs that function as AI assistants, such as Alexa and Siri. These assistants help businesspeople run their businesses more efficiently; most can manage client projects, leaving more time to keep clients satisfied and happy.


A Look into AI Assistants

Before we go further, let's establish what an AI assistant is. An AI assistant is an application program that understands natural-language voice commands and completes tasks for the user. AI assistants use natural language processing (NLP) to match user text or voice input to executable commands, and most of them continually learn through artificial intelligence techniques, including machine learning.

To activate an AI assistant, a wake word may be used: a word or group of words such as "Alexa" or "Siri".
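To make those two steps concrete, here is a deliberately simplified sketch: listen for a wake word, then match the rest of the utterance to an executable command. Real assistants use trained NLP models rather than this kind of literal keyword matching, so treat the intent table below as purely illustrative.

```python
# Toy sketch of wake-word detection plus intent matching. Real assistants
# use trained NLP models, not literal string matching.
WAKE_WORD = "alexa"

INTENTS = {
    "weather": ["weather", "forecast", "rain"],
    "timer":   ["timer", "remind", "alarm"],
    "music":   ["play", "song", "music"],
}

def handle(utterance: str):
    words = utterance.lower().split()
    if not words or words[0] != WAKE_WORD:
        return None  # not addressed to the assistant
    rest = words[1:]
    for intent, keywords in INTENTS.items():
        if any(k in rest for k in keywords):
            return intent
    return "fallback"

print(handle("Alexa what is the weather today"))  # -> "weather"
```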

A Deeper Understanding of Alexa and Siri

Alexa is an AI assistant developed by Amazon. It was first used in the Amazon Echo and Amazon Echo Dot smart speakers developed by Amazon Lab126. Among other things, Alexa is capable of voice interaction, music playback, making to-do lists, streaming podcasts, and providing weather, traffic, sports, and other real-time information such as news.

Another great function of Alexa is its ability to control several smart devices, acting as a home-automation hub. What's more, users can extend Alexa's capabilities by installing "skills": additional functionality, such as weather programs and audio features, developed by third-party vendors.
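Under the hood, a skill is essentially a web service (often an AWS Lambda function) that receives a JSON request from Alexa and returns a JSON response. The bare-bones handler below follows the documented response format; the greeting text and the fact-skill framing are placeholders of our own.

```python
# Bare-bones sketch of an Alexa skill handler (e.g., an AWS Lambda
# function). Alexa sends a JSON request; the skill returns a JSON
# response in the documented format. The speech text is a placeholder.
def lambda_handler(event, context):
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        text = "Hello! Ask me for a fun fact."
    else:
        text = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```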

Additionally, Amazon lets device manufacturers integrate Alexa voice capabilities into their own connected products through the Alexa Voice Service (AVS), a cloud-based service that provides application programming interfaces for interacting with Alexa.

Whereas Alexa was developed by Amazon, Siri is the AI assistant built into Apple Inc.'s iOS, watchOS, macOS, and tvOS. It uses voice queries and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of internet services. With continued use, the software adapts to each user's language, searches, and preferences, so returned results are individualized.

In actuality, Siri is a spin-off from a project originally developed by the SRI International Artificial Intelligence Center, and its speech recognition engine was provided by Nuance Communications.

Siri supports a wide range of user commands, including performing phone actions, handling device settings, checking basic information, searching the Internet, scheduling events and reminders, navigating to places, and engaging with iOS-integrated apps. In addition, Siri relies on advanced machine learning to function.


How Did Alexa and Siri Get Their Names?

We have all been saying, or sometimes practically shouting, their names at our gadgets, but have you ever wondered where those names came from in the first place? If you don't know the answer, let me tell you now.

Siri was not originally created by Apple; it began elsewhere and was eventually bought by Apple for its iOS products. And so, Apple had no say over what to name this piece of technology.

Adam Cheyer, Siri's co-creator, said the name Siri was chosen because it was short to type, easy to remember, comfortable to pronounce, and distinctive.

As for Alexa, the inspiration for the name apparently came from Star Trek, where characters say "computer" aloud before issuing a voice command to their ship. That gave the team an idea. However, "computer" wasn't a suitable wake word for the Amazon Echo, since the staff needed a distinctive word that wouldn't be spoken accidentally in everyday conversation.

In the end, they settled on Alexa because it is easy to pronounce and uncommon in everyday speech; most importantly, the letter "X" gives it a distinctive sound.


admin · June 2, 2018

With today's ever-growing, rapidly evolving technology, a lot of revolutionary inventions have emerged, and one of them is artificial intelligence. We see it in most of our modern gadgets: mobile phones, smart watches, drones, self-driving cars, speakers, Siri, and many others. But what happens if something, however trivial or small, goes wrong?

Researchers at the MIT Media Lab sounded a little scared of their latest invention: a neural network called "Norman", which they claim exhibits thought processes closer to "psychopathic" than any AI before it. Part of its introduction to the public was staged as an April Fools' prank on the lab's official site, where they claimed that Norman's thought processes had become permanently deranged by constant exposure to the darkest corners of Reddit. While that diagnosis may have been intentionally exaggerated, Norman's incredibly morbid point of view is terrifyingly real.

What Is Artificial Intelligence?

Artificial intelligence, commonly referred to as AI, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. It can also be understood as an area of computer science that emphasizes the creation of smart machines that work and react like humans; some are designed for speech recognition, learning, planning, and problem-solving.

Research in artificial intelligence is specialized and highly technical. Its central problems include programming computers for traits such as reasoning, knowledge, planning, learning, perception, problem-solving, and the ability to manipulate and move objects.

Benefits and Risks of Artificial Intelligence

There are people who feel negatively about artificial intelligence, but it can be quite beneficial. First and most important, it produces jobs. Contrary to popular belief, artificial intelligence actually helps companies such as Google and Netflix exist; Google alone employs more than 70,000 people. Imagine 70,000 people holding jobs that don't involve mindless tasks; that is just one benefit of artificial intelligence, and only the success it has had with a single company. Artificial intelligence helps businesses grow, develop, and allocate new resources to hire more people. Another benefit is that it helps reduce the effect of some of humanity's poor decisions. Others are its convenience, its efficiency, and its role in accelerating advancement in healthcare and technology.

Of course, such advanced technology also carries risks. First on the list: phishing scams could become even more prevalent and effective thanks to AI. Hackers could start using AI the way financial firms do. Fake news and propaganda are only going to get worse, and AI could make weapons more destructive.

Meet Norman, the AI Psychopath

A neural network designed by researchers at the MIT Media Lab, Norman is disturbingly different from other types of artificial intelligence (AI). Norman is an algorithm trained to interpret pictures, but, like its namesake, Hitchcock's Norman Bates, it does not have an optimistic view of the world.

Norman's point of view is incessantly bleak: it sees dead bodies, blood, and destruction in every image. When it was shown various inkblot drawings and asked what it saw, its responses were genuinely disturbing. A drawing that a regular AI would interpret as "a close up of a vase with flowers" Norman interpreted as "a man shot dead". One that a regular AI would read as "a black and white photo of a small bird" Norman read as "a man getting pulled into a dough machine". Even colored inkblots drew interpretations like "a man shot dead in front of his screaming wife" and "a man killed by a speeding driver". Norman is biased toward death and destruction because that is all it knows, and AI in real-life situations can be equally biased if trained on flawed, bad data.
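That last point, that a model can only echo its training data, is easy to demonstrate. The toy sketch below builds two tiny caption retrievers from two made-up corpora; the "Norman" version can only ever answer with something morbid because that is all it has seen. It illustrates data bias only and has nothing to do with MIT's actual model.

```python
# Toy illustration of training-data bias (not MIT's actual model).
# Two caption "retrievers" are built from two tiny, made-up corpora;
# each can only ever answer with captions it was trained on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def build_captioner(captions):
    """Return a function mapping a description to its nearest training caption."""
    vec = TfidfVectorizer()
    index = NearestNeighbors(n_neighbors=1).fit(vec.fit_transform(captions))
    def caption(description):
        _, idx = index.kneighbors(vec.transform([description]))
        return captions[idx[0][0]]
    return caption

regular_ai = build_captioner([
    "a close up of a vase with flowers",
    "a black and white photo of a small bird",
])
norman = build_captioner([
    "a man shot dead",
    "a man getting pulled into a dough machine",
])

# The same input yields a benign caption from one model and a morbid one
# from the other: each model can only echo the data it has seen.
print(regular_ai("a small dark shape with wings"))
print(norman("a small dark shape with wings"))
```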

Apparently, Norman isn't the first AI to focus on dark subject matter. It was preceded by experiments such as "Nightmare Machine", an AI that learned to recognize images that scare people and to turn ordinary photos into horrific ones, and "Shelley", an AI fed text from various horror stories that eventually learned to write its own original tales of terror.


admin · May 31, 2018

Nvidia Corporation, commonly known as Nvidia, is an American technology company based in Santa Clara, California. It designs graphics processing units (GPUs) for the gaming and professional markets, as well as system-on-a-chip units for the automotive and mobile computing markets.

Since 2014, Nvidia has been shifting toward being a platform company focused on four markets: gaming, data centers, professional, and auto. It is now also focused on artificial intelligence.

Just yesterday, Nvidia launched a monster box called the HGX-2, exactly the kind of thing technology nerds would die for. It is a cloud-server platform purported to be extremely powerful, combining high-performance computing and artificial intelligence requirements in one phenomenal, compelling package.

Now for the specifications, since I'm sure you are all waiting for them. The Nvidia HGX-2 starts with 16 NVIDIA Tesla V100 GPUs, good for 2 petaFLOPS for artificial intelligence at low precision, 250 teraFLOPS at medium precision, and 125 teraFLOPS when you require the highest precision. It also comes with half a terabyte of memory and 12 Nvidia NVSwitches that enable GPU-to-GPU communication at 300 GB per second. Compared with the HGX-1, released last year, the HGX-2 doubles the capacity.
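For a sense of where that headline number comes from: assuming the commonly quoted Tensor Core rating of roughly 125 teraFLOPS per Tesla V100 at mixed precision, the arithmetic is simply 16 GPUs × 125 teraFLOPS ≈ 2 petaFLOPS.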

The sad part is that you won't be able to buy one of these boxes. Nvidia is distributing them strictly to resellers, who will most likely package them and sell them to data centers and cloud providers. "The beauty of this approach for cloud resellers is that when you buy it, they have the entire range of precision in a single box," said Paresh Kharya, Group Product Marketing Manager for Nvidia's Tesla data center products.

“The benefit of a unified platform is that while companies and cloud providers are building their infrastructure, they can standardize on a single unified architecture that supports the whole range of high-performance workloads. So, whether it’s an artificial intelligence or high-performance simulations, the whole range of workloads is now possible in just a single platform,” Kharya further explained.

“In hyper-scale companies or cloud providers, the main benefit that the Nvidia HGX-2 provides is the economies of scale. If they can standardize on the fewest possible architectures, they can ultimately maximize their operational efficiency. What HGX-2 allows them to do is to standardize on that single unified platform,” he added.

Meanwhile, Jensen Huang, Nvidia's founder and chief executive, said, "The world of computing has changed. CPU scaling has slowed at a time when computing demand is skyrocketing. Nvidia's HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses high-performance computing and artificial intelligence to solve the world's grand challenges."

But this GPU-driven technological powerhouse is just the beginning. Incredibly capable in its own right, it could also serve as the basis for even more powerful arrays of technologically advanced creations.

Nvidia HGX-2-powered servers will be available on the market later this year.


