
Wednesday, October 28, 2015

2015’s most critical information technology trends

Technology in 2014 was dominated by several prevalent trends – from the Internet of Things (IoT) and big data to the cloud and cyber security, barely a day went by without one of them cropping up in the news.
Now that we have entered 2015, these trends remain at the forefront of discussion, with many exciting opportunities, innovations and iterations predicted for this year.

1. Internet of Things investment will continue to increase

In 2014 there was a rapid increase in IoT solutions being deployed to advance business intelligence. ABI Research reported a 20% increase in IoT-connected devices in 2014 over 2013. This year, Cisco predicts there will be 25 billion connected devices, doubling to 50 billion by 2020. Information Age suggests IoT will revolutionise business by allowing companies to improve value propositions, engage with customers on levels previously unavailable and build entirely new revenue streams.
See also: 2015 technology trends – what are the security implications?
So far, investment in IoT has mainly come from the IT and telecoms industry, which will naturally benefit from the increase in data generated and in application capabilities for mobile devices. In 2015, commentators predict investment in IoT will spread beyond this industry. Retail in particular is looking to tap into sensor data generated via wearable technology to provide highly targeted products and services to its customers.
As businesses look to IoT technologies for more insight, there is ever-increasing demand for analysts capable of transforming IoT data into actionable business intelligence. There is a shortage of data scientists, for example, with positions in the UK increasing by a staggering 1005% over the last two years. This makes data science one of the fastest-growing and most in-demand professions in the UK to date, and demand is predicted to increase further in 2015.

2. Big data requires big investment in infrastructure and skills

Big data as a concept is ever evolving as the capacity to mine structured, semi-structured and unstructured data increases. In 2014, organisations were making more informed business decisions and interacting more intelligently with their customers. For example, more sophisticated 'recommendation engines' now anticipate users' interests more accurately for services such as Netflix, Amazon and Google. Further, credit reference agencies have been using big data to inform lending decisions by refining the algorithms used to generate credit ratings. Retail, logistics and budget planning all saw significant advances last year thanks to greater business intelligence.
In 2015, spending on big data-related software is expected to increase to around £80 billion globally, as reported by IDC. IoT will become the next critical focus for data and analytics services, with IDC predicting 30% CAGR over the next five years. The increasing influx of data available to organisations will require the infrastructure used to house, process, analyse and visualise intelligence to expand. IDC predicts that rich media analytics will be the driver behind many big data projects, expecting this area to at least triple in size.
The increased demand for greater sophistication in analysis and data consumption will require organisations to refine talent acquisition strategies to compete amid the skills gap. A recent survey of 300 UK businesses, conducted by SAP, showed that while 92% of organisations have seen their data grow over the last 12 months, most were experiencing barriers when trying to use the information. Further, 42% of organisations saw lack of time and resources as the biggest challenge, and 75% believed new data science skills are needed within their organisation.
3. More business operations will gravitate to the cloud
2014 was another big year for cloud computing. The IDG Enterprise Cloud Computing Study found that 69% of participating firms had at least something in the cloud, up 8% from 2013. The reasons behind the move to cloud-based operations are numerous – from IT agility and innovation to employee collaboration – and cloud computing is becoming the hub of operational infrastructure. Big data, generated through IoT, is an important driver for organisations moving to the cloud.
This year, there will be greater adoption of cloud and in-house/cloud hybrid operations among businesses. The IDG study also found that 61% of organisations will invest in emerging technologies to improve their existing cloud solutions. Technologies such as software-defined networking (SDN) and network functions virtualisation (NFV) are being explored to give greater agility to cloud investments.

4. Cyber security industry must invest in skilled talent

2014 is now being dubbed 'the year of the breach', and it is not difficult to see why. Last year sensitive data was leaked at Google, Apple and, most recently, Sony, to name but a few leading tech giants. Security weaknesses have been attributed to a number of key areas, such as misconfiguration issues, third-party providers, lack of network diversity and, most worrying of all, lack of qualified security talent.
The cyber security skills gap is perhaps the underlying issue behind 'the year of the breach', with a knock-on effect on industry and the economy. Cyber security skills are a global priority but, with a lack of consistency in accepted career definitions, organisations are experiencing difficulties in attracting new talent and progressing existing professionals. To offset the skills deficit, talent from the gaming industry is being brought into the security sphere and its skills adapted for this arena.
Now in 2015, the information security industry has an opportunity to redefine itself and turn last year's negative global coverage into a spur for attracting new talent. Investment, employer branding and clearly defined career paths are essential if organisations are to close the skills gap and avoid seeing their own brand in the papers over the latest leak.
The Internet Systems Security Association (ISSA) identified the need for an internationally accepted framework defining the cyber security career for individuals in the profession. To attract new entrants and help professionals advance in their careers, the ISSA has developed the Cybersecurity Career Lifecycle (CSCL). This proactive approach to industry development could go a long way towards filling the estimated 300,000 to 1,000,000 (and rising) currently vacant global cyber security positions.

See also: The 2015 cyber security roadmap

As we look ahead, there are many fantastic opportunities in IT to look forward to – from the established trends of 2014 to emerging technologies yet to be discussed. The IT industry is growing rapidly, with demand, investment and technological capability the three pillars sustaining that growth.
However, the barriers facing the industry are clear and present dangers. The shortage of skills and talent capable of realising the expectations for these pressing technologies must be addressed at an educational, cultural and organisational level.
Nurturing computing skills must be as core to the UK syllabus as English, Maths and Science, to provide the next generation with the tools they will need in their future careers, no matter the profession.
Culturally, there appears to be a strong need to make a career in IT more attractive to young women. A study by the Joint Council for Qualifications found that fewer than 8% of last year's A-Level Computing students were female, and the figure is still declining: of 4,171 students studying Computing, only 314 were female – a 75% decrease over the last 10 years. These figures are concerning to say the least, with the skills gap in this sector widening.

Sourced from Mark Braund, CEO, InterQuest Group

Thursday, August 14, 2014



With the advent of the Internet and the plurality and variety of applications it brought with it, the demand for more advanced services on cellular phones is becoming increasingly urgent. Unfortunately, the introduction of new enabling technologies has so far failed to boost new services. The adoption of Internet services has proven difficult due to the differences between the Internet and the mobile telecommunication system. The goal of this paper is to examine the characteristics of the mobile system and to clarify the constraints imposed on existing mobile services. The paper also investigates, in turn, the enabling technologies and the improvements they brought. Most importantly, it identifies their limitations and captures the fundamental requirements for future mobile service architectures, namely openness, separation of service logic and content, multi-domain services, personalisation, Personal Area Network (PAN)-based services and collaborative services. The paper also analyses current mobile service architectures: voice communication, supplementary services with the intelligent network, services enabled on the SIM with the SIM Application Toolkit, text services with the Short Message Service, Internet services with WAP and dynamic applications on mobile phones with J2ME.

Sunday, August 10, 2014



Third-Generation (3G) wireless technologies offer the wireless web, SMS, MMS, EDGE, WCDMA, GPRS etc. 4G is a packet-switched technology that uses bandwidth much more efficiently, allowing each user's packets to compete for available bandwidth. It solves the non-standardisation problems associated with 3G. The data transfer rate will be 20 to 100 Mbps – 10 to 50 times faster than 3G and 10 to 20 times faster than ADSL. The operating frequency range will be 3 to 10 GHz, and the IPv6 protocol will be used. In this paper, the fundamentals of 4G and its various proposed architectures are explained. In India it can be used to network rural and urban areas, reduce the cost of communication, support educational activities, facilitate research and development, and provide faster internet connectivity, more cellular options, real-time information systems, crisis management, telemedicine and more. Present 3G networks need not be discarded and can be used in conjunction with 4G technology. Various proposed architectures can be used to deploy 4G.



           In this age of universal electronic connectivity, of viruses and hackers, of electronic eavesdropping and electronic fraud, there is no time at which security of information does not matter. The explosive growth of computer systems and their interconnection via networks has increased dependency on the information stored in and communicated using these systems. This has led to a heightened awareness of the need to protect transmitted data.
Thus the field of cryptography has received growing attention. Ever more complex techniques for encrypting information are regularly proposed, and advanced encryption algorithms such as RSA, DES and AES, which are extremely hard to crack, have been developed. But, as usual, each small step taken to improve security prompts attackers to work just as hard in the opposite direction to break it, and they have managed to mount successful attacks against many of these algorithms. Even complex algorithms like RSA are no exception.
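To make the idea of public-key encryption concrete, here is a toy sketch of RSA's core in Python, using the familiar textbook primes 61 and 53. It is illustrative only and trivially breakable; real deployments use 2048-bit keys through a vetted cryptography library.

```python
# Toy RSA sketch with tiny primes -- illustrative only, trivially breakable.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    _, d, _ = egcd(e, phi)        # d is the modular inverse of e mod phi
    return (e, n), (d % phi, n)   # (public key, private key)

pub, priv = make_keys(61, 53)     # n = 3233, the classic worked example
m = 42                            # message must be < n
c = pow(m, pub[0], pub[1])        # encrypt: c = m^e mod n
assert pow(c, priv[0], priv[1]) == m  # decrypt: m = c^d mod n
```

The entire security of the scheme rests on the difficulty of factoring `n`; with primes this small, anyone can recover the private key instantly, which is why key sizes matter.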

So, to deceive attackers, people have started to use a technique called 'steganography'. It is not an entirely new technique and has been in practice since ancient times. In this method, data is hidden inside innocuous carriers such as images, audio and video, so that observers do not even suspect there is a second message behind the object. Images are the most commonly used carriers.
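A minimal sketch of the least-significant-bit (LSB) technique commonly used with images: each bit of the secret replaces the lowest bit of one cover byte, a change far too small to see. A plain bytearray stands in for decoded pixel data here; real tools read and write actual image files.

```python
# LSB steganography sketch: hide bytes in the least-significant bit of
# each "pixel" byte. A bytearray stands in for decoded image pixels.

def hide(pixels, secret):
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover too small for this secret")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(pixels, n_bytes):
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i  # re-collect the low bits
        data.append(byte)
    return bytes(data)

cover = bytes(range(200)) * 2            # stand-in for pixel data
stego = hide(cover, b"meet at noon")
assert reveal(stego, 12) == b"meet at noon"
```

Because each cover byte changes by at most 1, the stego image is visually indistinguishable from the original, which is precisely the point of the technique.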



        Robots are becoming essential, intelligent devices used to perform a variety of tasks, some of which are beyond the scope of human beings. They find extensive use in areas such as industrial automation, nuclear installations, pharmaceutical and medical fields, and space research. Developing an autonomous mobile robot able to vacuum a room, or even an entire firm, is not a trivial challenge. In order to tackle such a task, simplifications and assumptions were made to the designer's initial idea of an "ideal" autonomous vacuum cleaner. Consequently, some functional requirements that would improve the robot's performance were not taken into account, due to their inherent complexity or their mechanical implications. Probably the decision that most affects the robot's complexity is forgoing the ability to map the environment, which would yield much better efficiency than the minimalist approach followed here (random navigation). With the aim of keeping the robot as simple as possible while still meeting the initial goals – an autonomous vacuum cleaner robot able to navigate randomly through a room or a house with minimal human assistance – the following specifications were identified:

Obstacle avoidance
Collision detection
Autonomous battery charging
Autonomous dust bag dump

These specifications correspond to some of the expected behaviours that will be programmed into the robot. Other behaviours that would increase its overall performance, such as self-calibration of the sensors and navigation with some memory (not completely random), were also considered. While the robot is moving, obstacles are detected by two sensors: an ultrasonic sensor and an infrared sensor. These sensors feed input signals to the controlling unit (an 89C51 microcontroller). Depending on the input signal from the sensors, the controller interrupts the drive unit, which is fitted to the vehicle to drive the mobile platform. The drive unit acts on the signal from the controller, and the vehicle turns left or right according to the obstacle detected. Vacuum cleaning is performed by a hand vacuum cleaner placed on the mobile platform, so dust is collected as the vehicle moves.
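The sense-decide-drive loop described above can be sketched as follows. The real firmware targets the 89C51 microcontroller in assembly or C; the sensor reads, threshold and motor commands below are hypothetical Python stand-ins meant only to illustrate the control flow.

```python
# Control-flow sketch of the obstacle-avoidance loop. Sensor functions
# are stubs; on the real robot they read the ultrasonic and IR hardware.
import random

def read_ultrasonic():
    # Distance to the nearest object in cm (stubbed for illustration)
    return random.uniform(2, 100)

def read_infrared():
    # True when an obstacle reflects the IR beam (stubbed)
    return random.random() < 0.2

def step(threshold_cm=15):
    """One pass of the sense-decide-drive loop; returns a drive command."""
    if read_infrared() or read_ultrasonic() < threshold_cm:
        # Obstacle ahead: interrupt the drive unit and turn away.
        return random.choice(["turn_left", "turn_right"])
    return "forward"  # clear path: keep vacuuming forward

for _ in range(5):
    print(step())
```

Randomly choosing the turn direction matches the random-navigation strategy the abstract adopts; a mapping robot would instead pick the turn that best extends its coverage of the room.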



                    This paper considers the development of a brain-driven car, which would be of great help to physically disabled people. Since these cars rely only on what the individual is thinking, they require no physical movement on the part of the individual. The car integrates signals from a variety of sensors – video, weather monitoring, anti-collision and so on – and also has an automatic navigation system for use in emergencies. The car works on the asynchronous mechanism of artificial intelligence. It is a great technological advance that will empower the physically disabled.


Abstract –

As technology advances, human needs grow towards comfort. KMAIL is a dynamic mail system using voice logging in a cloud computing environment. Integrating voice communication into business processes can accelerate resolution time, reduce mistakes and establish a full audit trail of interactions. In this project, we propose a dual way of accessing mail – via speech synthesis and the normal way – in a cloud computing environment. We propose a generic framework consisting of a VXML-based service specification language. Service providers using the proposed service-oriented architecture can offer their customers a protocol-neutral Web Service interface, enabling the deployment of a general, integrated cloud solution for mail.
Key words: KMAIL, VXML, cloud computing, speech synthesis.
I. INTRODUCTION
Today, people spend most of their time interacting with computers. Life runs at microchip speed. Over the last decade, these tiny electronic signals have fundamentally revolutionised the way we live. People spend more hours per day with machines than with other humans, communicating through mail across the globe for official and unofficial information. Email and the web have become part of our lives. These mailing systems are developed by different organisations with different modules and different capabilities, but they fail to reach physically disabled users and senior citizens, and they fall short on ease of use. Frequent and prolonged computer sessions may also pose physical health risks such as visual strain, harmful effects of radiation, and posture and skeletal problems.
This problem can be overcome to some extent by enhancing applications with speech synthesis and speech recognition systems. Here we propose the KMAIL reader system, through which members can exchange information; it is supported by a reader that automatically reads out the contents of a mail.
A cloud-based database system is used, in which Automatic Speech Recognition (ASR) can be enabled. The selected mail in the inbox is processed to identify the sender, recipient, subject and message. This text is normalised and segmented at the syllable level; the corresponding syllable units are selected based on context, and the selected units are concatenated to produce the wave file for the given text.
The system allows the user to register as a valid user of the KMAIL service and then access its modules, where a valid user can send, delete and receive mails in the user's own language. Received mails are converted from text to speech using a concatenative speech synthesis approach; an appropriate wave file is generated and the speech is played.
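The concatenative step can be illustrated with a small sketch: normalise the text, greedily segment each word against a syllable inventory, and join the matching waveform units. The unit names and sample values below are invented for illustration; the actual system works with recorded syllable wave files.

```python
# Sketch of concatenative synthesis: normalise, segment into syllable-like
# units, and concatenate the matching waveforms. The inventory is a stub.

SYLLABLE_DB = {                   # unit -> waveform samples (hypothetical)
    "hel": [0.1, 0.2], "lo": [0.3], "world": [0.4, 0.5],
}

def normalise(text):
    return text.lower().strip(".,!? ")

def segment(word):
    # Greedy longest-match against the unit inventory (simplified).
    units, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in SYLLABLE_DB:
                units.append(word[i:j])
                i = j
                break
        else:
            i += 1                # no unit covers this character; skip it
    return units

def synthesise(text):
    samples = []
    for word in normalise(text).split():
        for unit in segment(word):
            samples.extend(SYLLABLE_DB[unit])  # concatenate unit waveforms
    return samples

print(synthesise("Hello world"))  # joined samples for "hel" + "lo" + "world"
```

A production system would additionally smooth the joins between units and pick among multiple recordings of each syllable based on context, as the paragraph above describes.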

The paper is organized as follows. Section 2 gives details of the KMAIL architecture. Section 3 describes the TTS system architecture and the different modules in the system. Section 4 concludes, and section 5 outlines future scope.

Green Computing

Environmental issues are receiving unprecedented attention from businesses and governments around the world. In a special 2005 address to the World Economic Forum in Davos, then-Prime Minister of the United Kingdom Tony Blair argued that the weight of evidence is such that swift action must be taken to address global warming. This comment came alongside a marked shift in environmental dialogue across societies and in business leadership circles. As concern for climate change and sustainability continues to grow, and action now ramps up, businesses are grappling with reducing carbon footprints while remaining profitable.
Moreover, in 2009, businesses are feeling the negative impact of the economic climate. Senior leaders – in the corporate office and in IT – are surveying their businesses for readily achievable cost savings to make up for tightened budgets and profit margins. IT departments, having run lean in the past, are on the hunt for new initiatives that reduce costs without compromising business value.
Many businesses have discovered that Green computing initiatives offer cost-savings benefits while reforming the organization, meeting stakeholder demands and complying with laws and regulations. In this study, IBM and Info-Tech Research Group find that businesses that complete Green computing initiatives realize significant cost savings alongside superior environmental performance.
Green computing: A Working Definition
Green computing comprises initiatives and strategies that reduce the environmental footprint of technology. This arises from reductions in energy use and consumables, including hardware, electricity, fuel and paper, among others. Because of these reductions, Green computing initiatives also produce cost savings in energy use, purchases, management and support, in addition to environmental benefits. Beyond cost savings and environmental benefits, some initiatives may address stakeholder and regulatory needs and demands.
For example, server virtualization allows businesses to reduce the capital cost of future server purchases, and the operational costs of energy, maintenance and management. Electricity footprints and the amount of equipment needing future recycling are simultaneously reduced, and often, the business realizes incentives or rebates for saving energy from local utilities or governments.
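A back-of-envelope illustration of the consolidation saving described above; all figures here are assumed for illustration and are not taken from the IBM/Info-Tech study.

```python
# Hypothetical server-consolidation estimate: energy saved by replacing
# many lightly loaded physical servers with a few virtualization hosts.

physical_servers = 40
consolidation_ratio = 8            # VMs hosted per physical server (assumed)
watts_per_server = 400             # average draw per machine (assumed)
hours_per_year = 24 * 365
price_per_kwh = 0.12               # assumed tariff, GBP

hosts_after = -(-physical_servers // consolidation_ratio)  # ceiling division
saved_kwh = ((physical_servers - hosts_after)
             * watts_per_server * hours_per_year / 1000)

print(f"Hosts: {physical_servers} -> {hosts_after}")
print(f"Energy saved: {saved_kwh:,.0f} kWh/yr "
      f"(~£{saved_kwh * price_per_kwh:,.0f}/yr)")
```

Even with conservative assumptions, decommissioning 35 of 40 machines removes their entire electricity draw, which is why virtualization so often headlines Green computing business cases.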


Network Security Traceback for Distributed Denial of Service

Problem statement: Distributed Denial of Service (DDoS) is a serious threat in the internet world: it denies legitimate users access to the internet by blocking the service. Approach: In this study, we propose a novel approach, Geographical Division Traceback (GDT), for efficient IP traceback and DDoS defence. DDoS attack is one of the most serious and threatening issues on the modern web because of its notorious harmfulness: it delays the availability of services to the intended users. Results: Unlike a traditional traceback methodology, GDT provides a quick mechanism to identify the attacker with the help of a single packet, which imposes very little computational overhead on the routers; the victim can also avoid receiving data from the same machine in future. This IP traceback mechanism utilises geographical information to find the machine responsible for the attack. The IP packet carries details of the subspaces through which its path passes, making it possible to verify whether the packet travelled through the network and falls within one of the subspaces. The successive division of subspaces leads to the source of the attack. Conclusion/Recommendations: This method possesses several advantageous features, such as easy traversal to the attacker and improved efficiency in tracing the attacking system.

Key words: Network security, Distributed Denial of Service, IP traceback, packet marking, Geographical Division Traceback (GDT)
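As a rough illustration of the division idea (not the paper's actual packet format), one can imagine the region containing the attacker being repeatedly halved in each dimension, with the packet accumulating a quadrant index per division; replaying those indices narrows the source down to a small cell. Coordinates and encoding below are hypothetical.

```python
# Illustrative sketch of geographical division traceback: quadrant marks
# accumulated along the path let the victim localise the source region.

def quadrant(x, y, cx, cy):
    # 0=SW, 1=SE, 2=NW, 3=NE relative to the region centre (cx, cy)
    return (2 if y >= cy else 0) + (1 if x >= cx else 0)

def mark_path(src, region=(0.0, 0.0, 1024.0, 1024.0), depth=8):
    """Quadrant indices a marked packet would accumulate for source `src`,
    plus the final bounding box the victim can reconstruct from them."""
    x0, y0, x1, y1 = region
    marks = []
    for _ in range(depth):
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        q = quadrant(src[0], src[1], cx, cy)
        marks.append(q)
        # Shrink the region to the quadrant that contains the source.
        x0, x1 = (cx, x1) if q & 1 else (x0, cx)
        y0, y1 = (cy, y1) if q & 2 else (y0, cy)
    return marks, (x0, y0, x1, y1)

marks, box = mark_path((700.0, 300.0))
print("marks:", marks)            # sequence of quadrant indices
print("attacker within:", box)    # a 4x4 cell after 8 divisions of 1024
```

Each division adds only two bits to the packet, which is consistent with the abstract's claim of very low per-router overhead for single-packet traceback.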


