Cutting Through the Mire of Tablet Issue Production

Each publisher has approached tablets at its own pace, with its own purpose. The result is a scattered set of protocols across the industry.

The International Digital Enterprise Alliance (IDEAlliance), an association serving players across the digital media supply chain, is attempting to simplify the process of tablet issue production by eliminating many of the competing formats and workflows. The goal is an industry standard called OpenEFT: guidelines to direct the packaging, delivery and display of digital magazines for everyone in the ecosystem. OpenEFT’s final draft was unveiled late last month.

“We, as publishers, would like to be able to provide a designed-for-tablet, interactive edition to all the newsstands,” says Sean Keefe, executive director of publishing technology for Hearst Magazines. “But right now, not all of them take the same file formats.”

The benefits for publishers are twofold: tablet issue production would become more efficient, and the barriers to third-party innovation would be lowered.

Tablet issue production can be convoluted now. Hearst currently produces up to three formats (and several variants) of its magazines, depending on the brand and the newsstand it’s working with; Next Issue Media, a digital newsstand, is forced to adapt about six formats for its storefront. Many of those conversions are labor intensive and require quality assurance testing at multiple points.

Ideally, says Keith Barraclough, CTO and vice president of products for Next Issue, the exchange of files would be simplified, QA would be needed only once and the process could be automated.

“Whether OpenEFT can do all this as it goes through its standardization process and tools and manufacturers come along and adopt, that’s all a big ‘TBD,’” he says. “But that’s the nirvana we’re looking for.”

An open specification, ePub, already exists, but it was built to handle books, not magazines.

“The orientation toward imagery, layout and the subtlety of the navigation of a magazine is something that’s evolved more,” Barraclough says.

While Dianne Kennedy, vice president of emerging technologies for IDEAlliance, says OpenEFT is closely modeled after ePub, she adds that the need for tablet-optimized ad units is another major reason the book-centric format needed to be adapted for digital magazines. Magazine staff have to manipulate the ad units they receive from agencies, often without knowing exactly how the final product is supposed to render. The cost and confusion make interactive ads rare.

“Magazines, unlike books, rely a lot on the ad model,” Kennedy says. “There is no specification for the exchange and rendering of this interactive content, so the magazines have been limiting the number of interactive ads they will accept.”

Regardless of how or why they started with tablet editions, publishers agree that improving production efficiency is beneficial. Now it’s up to them to adopt the standard.

US needs an internet data privacy law, GAO tells Congress

The federal government’s chief auditor has recommended that Congress consider legislation to beef up consumers’ internet data privacy protections, much like the EU’s General Data Protection Regulation.

The recommendation was included in a 56-page report (PDF) issued Wednesday by the Government Accountability Office, the agency that provides auditing, evaluation and investigative services for Congress. The report was prepared at the request, made two years ago, of Rep. Frank Pallone Jr. (D-N.J.), chairman of the House Energy and Commerce Committee, which has scheduled a hearing on the subject for Feb. 26.

“Since I requested this report, the need for comprehensive data privacy and security legislation at the federal level has only become more apparent,” Pallone said in a statement. “From the Cambridge Analytica scandal to the unauthorized disclosures of real-time location data, consumers’ privacy is being violated online and offline in alarming and dangerous ways.”

In making its recommendation, the GAO cited Facebook’s Cambridge Analytica scandal, saying the episode was just one of many recent internet privacy incidents in which users’ personal data may have been improperly disclosed.

The GAO suggests giving the Federal Trade Commission more authority over internet privacy enforcement, but it also raised concerns about the commission’s enforcement abilities. Noting that the FTC is already the de facto authority over internet privacy in the US, the GAO found that the agency filed 101 internet privacy enforcement actions in the past decade. Nearly all of those cases resulted in settlement agreements, and in most of them no fines were issued because the FTC lacked the authority to impose them.

“Recent developments regarding Internet privacy suggest that this is an appropriate time for Congress to consider comprehensive Internet privacy legislation,” the GAO report said. “Although FTC has been addressing Internet privacy through its unfair and deceptive practices authority, among other statutes, and other agencies have been addressing this issue using industry-specific statutes, there is no comprehensive federal privacy statute with specific standards.”

The report was issued a day before news emerged that the FTC and Facebook were negotiating a multibillion-dollar fine to settle an investigation into the social network’s privacy practices. The exact amount hasn’t been determined, but it would be the largest fine ever imposed by the agency.

The FTC began investigating Facebook last year after it was revealed that Cambridge Analytica, a digital consultancy linked to the Trump presidential campaign, improperly accessed data from as many as 87 million Facebook users. The agency is looking into whether Facebook’s actions violated a 2011 agreement with the government in which the company pledged to improve its privacy practices. Facebook has said it didn’t violate the consent decree.

Creating a US internet privacy law like the GDPR has won some support from tech leaders. Apple CEO Tim Cook has praised the EU regulation as effective data privacy protection and said he supports a “comprehensive federal data privacy law” in the US. “It is up to us, including my home country, to follow your lead,” he told the European Parliament in October.

Why TensorFlow always tops machine learning and artificial intelligence tool surveys

TensorFlow is an open source machine learning framework for carrying out high-performance numerical computations. It provides excellent architecture support that allows easy deployment of computations across a variety of platforms, ranging from desktops to clusters of servers, mobile devices, and edge devices.

Have you ever wondered why TensorFlow has become so popular in such a short span of time? What made TensorFlow so special that we are seeing a huge surge of developers and researchers opting for it? Interestingly, when it comes to artificial intelligence framework showdowns, you will find TensorFlow emerging as the clear winner most of the time. Much of the credit goes to its soaring popularity and the contributions across forums such as GitHub, Stack Overflow, and Quora. TensorFlow is being used in over 6,000 open source repositories, showing its roots in many real-world research projects and applications.

How TensorFlow came to be

The library was developed by a group of researchers and engineers from the Google Brain team within Google’s AI organization. They wanted a library that provides strong support for machine learning, deep learning, and advanced numerical computation across different scientific domains. Since Google open sourced the framework in 2015, TensorFlow has grown in popularity, with more than 1,500 project mentions on GitHub.

The constant updates to the TensorFlow ecosystem are the real cherry on the cake. They ensure that the new challenges developers and researchers face are addressed, easing complex computations and delivering new features, promises, and performance improvements through high-level APIs. By open sourcing the library, the Google research team gets the benefit of a huge set of contributors outside its core team. The idea was to make TensorFlow popular by open sourcing it, ensuring that new research ideas are implemented in TensorFlow first and leaving Google well placed to productize those ideas.

Read Also: 6 reasons why Google open sourced TensorFlow

What makes TensorFlow different from the rest?

With more and more research and real-life use cases going mainstream, there is a clear trend of programmers and developers flocking to TensorFlow. Its popularity is evident in the big names adopting it for artificial intelligence work: companies such as NVIDIA, Twitter, Snapchat, Uber and more use TensorFlow across major operations and research areas.

One could argue that TensorFlow’s popularity rests on its origins: a framework developed in house at Google enjoys the reputation of a household name, and there is no doubt TensorFlow has been better marketed than some of its competitors.

[Chart comparing deep learning framework popularity. Source: The Data Incubator]

However, that’s not the full story. There are other compelling reasons why companies from small scale to large scale prefer TensorFlow over other machine learning tools.

TensorFlow key functionalities

TensorFlow provides an accessible, readable syntax, which is essential for making these programming resources easy to use; complex syntax is the last thing developers need given machine learning’s already advanced nature. It also provides excellent functionality and services compared to other popular deep learning frameworks.
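As a rough illustration of that readable, high-level style, here is a minimal sketch using the Keras API bundled with TensorFlow. The dataset choice, layer sizes, and single training epoch are arbitrary picks for the example, not anything the article prescribes.

```python
import tensorflow as tf

# A complete toy digit classifier in a handful of readable lines.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# MNIST ships with Keras and is downloaded on first use.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)
print(model.evaluate(x_test / 255.0, y_test))  # [loss, accuracy]
```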
High-level operations like these are essential for carrying out complex parallel computations and for building advanced neural network models. At the same time, TensorFlow is a low-level library that provides real flexibility: you can define your own functionality or services for your models. This is an important property for researchers, because it allows them to change a model as user requirements change. TensorFlow also provides more network control, allowing developers and researchers to understand how operations are implemented across the network and to keep track of changes over time.

Distributed training

The trend of distributed deep learning began in 2017, when Facebook released a paper showing a set of methods to reduce the training time of a convolutional neural network: a ResNet-50 model was trained on the ImageNet dataset in one hour instead of two weeks, using 256 GPUs spread over 32 servers. That test opened the gates for a wave of research that has massively reduced experimentation time by running many tasks in parallel on multiple GPUs.

Google’s distributed TensorFlow allows researchers and developers to scale out complex distributed training using built-in methods and operations that optimize distributed deep learning across servers. The distributed engine is part of the regular TensorFlow repo and works exceptionally well with TensorFlow’s existing operations and functionality. It enables two of the most important distributed methods:

- Distributing the training of a neural network model over many servers to reduce training time.
- Searching for good hyperparameters by running parallel experiments over multiple servers.

This has given distributed TensorFlow the muscle to take market share from other distributed projects such as Microsoft’s CNTK, AMPLab’s SparkNet, and CaffeOnSpark. Even though the competition is tough, Google has still managed to become more popular than the alternatives.

From research to production

Google has, in some ways, democratized deep learning. The key reason is TensorFlow’s high-level APIs, which make deep learning accessible to everyone. TensorFlow provides pre-built functions and advanced operations that ease the task of building different neural network models, along with the infrastructure and hardware support that make it one of the leading libraries used by researchers and students in the deep learning domain.

Beyond research tools, TensorFlow extends these services into production with TensorFlow Serving. Designed specifically for production environments, it provides a flexible, high-performance serving system for machine learning models, with the functionality and operations needed to deploy new algorithms and experiments as requirements and preferences change. It offers out-of-the-box integration with TensorFlow models and can be extended to serve other types of models and data.

TensorFlow’s API is a complete package: easy to use and read, with helpful operators, debugging and monitoring tools, and deployment features.
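To make the distributed training and serving story concrete, here is a hedged sketch using the tf.distribute API from TensorFlow 2.x, which postdates this article's 1.x era but implements the same synchronous data-parallel idea the Facebook paper popularized. MirroredStrategy replicates the model across local GPUs only; true multi-server training would swap in MultiWorkerMirroredStrategy plus a TF_CONFIG environment variable, omitted here. The model, synthetic data, and export path are illustrative, not anything from the article.

```python
import numpy as np
import tensorflow as tf

# Synchronous data parallelism: each replica processes a slice of every
# batch, and gradients are averaged across replicas before each update.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():  # variables created here are mirrored on each device
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data stands in for a real input pipeline.
x = np.random.rand(1024, 10).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=64)

# Export a versioned SavedModel directory, the on-disk format
# TensorFlow Serving loads.
tf.saved_model.save(model, "/tmp/demo_model/1")
```

A TensorFlow Serving instance pointed at /tmp/demo_model would then expose the model over REST or gRPC with no further Python code.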
All of this has led to growing use of TensorFlow as a complete package by the emerging body of students, researchers, developers, and production engineers gravitating toward artificial intelligence from various fields.

There is a TensorFlow for web, mobile, edge, embedded and more

TensorFlow provides a range of services and modules within its ecosystem, making it one of the groundbreaking end-to-end tools for state-of-the-art deep learning:

TensorFlow.js for machine learning on the web

A JavaScript library for training and deploying machine learning models in the browser. It provides flexible and intuitive APIs to build and train new or pre-existing models from scratch, right in the browser or under Node.js.

TensorFlow Lite for mobile and embedded ML

TensorFlow’s lightweight solution for mobile and embedded devices. It is fast, enabling on-device machine learning inference with low latency, and it supports hardware acceleration through the Android Neural Networks API. Future releases of TensorFlow Lite will bring more built-in operators and performance improvements, and will support more models, simplifying the developer experience of bringing machine learning to mobile devices.

TensorFlow Hub for reusable machine learning

A library used extensively to reuse machine learning models, so you can apply transfer learning by reusing parts of existing models.

TensorBoard for visual debugging

The computations in a complex neural network model can be very confusing. TensorBoard makes TensorFlow programs easy to understand and debug through visualizations, letting you inspect and understand your TensorFlow runs and graphs.

Sonnet

A DeepMind library built on top of TensorFlow, used extensively to build complex neural network models.

All of these factors have made TensorFlow immensely appealing for a wide spectrum of machine learning and deep learning projects. The tool has become a preferred choice for everyone from space research giant NASA and other government agencies to an impressive roster of private sector giants.

Road Ahead for TensorFlow

TensorFlow is, no doubt, better marketed than the other deep learning frameworks, and its community appears to move very fast. In any given hour, roughly ten people around the world are contributing to or improving the TensorFlow project on GitHub. TensorFlow dominates the field with the largest active community, and it will be interesting to see what new advances it and other utilities make possible for our digital world.

Continuing the recent trend of rapid updates, the TensorFlow team is working to address the current and active challenges contributors and developers face while building machine learning and deep learning models. TensorFlow 2.0 will be a major update; the release candidate is expected by early March next year, with a preview version of this milestone due later this year. The major focus will be ease of use and additional support for more platforms and languages, and eager execution will be the central feature. This version will add more functionality and operations to handle current research areas such as reinforcement learning and GANs, and to build advanced neural network models more efficiently.
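Since eager execution is named as the centerpiece of 2.0, a tiny before-and-after sketch may help. The 1.x graph-and-session lines are shown as comments because the two styles cannot run in the same mode; everything here uses only standard TensorFlow calls.

```python
import tensorflow as tf

# TensorFlow 1.x graph style: building an op computes nothing by itself;
# you must execute it inside a session.
#   total = tf.add(tf.constant(2), tf.constant(3))
#   with tf.Session() as sess:
#       print(sess.run(total))  # 5

# Eager style (the default in TensorFlow 2.0; opt-in on late 1.x releases
# via tf.enable_eager_execution()): operations execute immediately.
total = tf.constant(2) + tf.constant(3)
print(total.numpy())  # 5, with no session or graph plumbing required
```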
Google will continue to invest in and upgrade the TensorFlow ecosystem. According to Google CEO Sundar Pichai, “artificial intelligence is more important than electricity or fire.” TensorFlow is the solution Google has come up with to bring artificial intelligence into reality and provide a stepping stone to revolutionize humankind.

Read more:

The 5 biggest announcements from TensorFlow Developer Summit 2018
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Tensor Processing Unit (TPU) 3.0: Google’s answer to cloud-ready Artificial Intelligence

Billy Bishop Airport introduces enhanced screenings for ferry passengers

Tuesday, March 13, 2018

TORONTO — Billy Bishop Toronto City Airport (YTZ) has implemented new enhanced security procedures that may require passengers to go through random screenings.

Effective immediately, the new procedures were put in place by Transport Canada under the Domestic Ferry Security Regulations. According to the airport, this may result in random checks of some passengers’ baggage and belongings for the presence of explosives.

“The random screening will be completed by swabbing the exterior of the baggage and/or belongings and analyzing the swab taken with a portable detection device,” said the airport on its website. “Passengers may expect to be approached by Billy Bishop Airport Security staff to participate in the screening process prior to boarding the ferry.”

Passengers can also access the airport via the pedestrian tunnel that connects the facility to the mainland; however, the airport has yet to confirm whether the security screenings will apply to this access point as well.

The last time YTZ updated its security measures was in July 2017, when it was authorized by the Canadian Air Transport Security Authority (CATSA) to conduct a secondary search and/or additional screening of electronics the size of a cell phone or larger.