Search Results

Search found 8643 results on 346 pages for 'listening platform'.


  • What are your suggestions on learning how to think?

    - by Jonathan Khoo
    First of all, this is not the generic 'make me a better programmer' question, even though the outcome of asking this question might seem similar. On Programmers.SE, I've read and seen these get closed here, here, here, here, and here. We all know there is a multitude of generic suggestions for honing your programming skills (e.g. reading SO, reading recommended books, following blogs, getting involved in open-source projects, etc.). This is not what I'm after. I also acknowledge the active readership on this web site and am hoping it works in my favour by yielding some great answers. From reading correspondence here, there appears to be a vast number of experienced people who are working, or have worked, in programming-related fields. And most of you can convey thoughts in an eloquent, concise manner. I've recently noticed the distinction between someone who's capable of programming and a programmer who can really think. I refuse to believe that in order to become a great programmer, we simply submit ourselves to a lifetime of sponge-like behaviour (i.e. absorbing everything related to our field by reading, listening, watching, etc.). I would even state that if you simply know every single programming concept that allows you to solve problem X faster than everyone around you, but you can't think, you're enormously limiting yourself - you're just a fast robot. I like to believe there's a whole other facet of being a great programmer which is unrelated to how much you know about programming: how well you can intertwine new concepts and apply them to your programming profession or hobby. I haven't seen anyone delve into, or address, this facet of the human mind and programming. (Yes, it's also possible that I haven't looked hard enough - sorry if that's the case.) So for anyone who has spent any time thinking about what I've mentioned above - or maybe it's everyone here, because I'm a little behind in my personal/professional development - what are your suggestions on learning how to think? Aside from the usual reading, what else have you done to be better than the other people in your/our field?


  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it's a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative.  I find it to be a great marketing idea with actual substance behind it.  From an academic programming perspective, in college we dreamed about this type of processing power.  It has decades of computer science theory behind it. A copy of the email I received follows.  (Note that I almost deleted this email, thinking it was spam due to its length.) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. The aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights.
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends.
Climate change models that predict environmental, economic and human impact, accessible in real time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob


  • Real or False Recovery? Economic 'tea-leaves'

    - by [email protected]
    "Information-technology is allowing the city's economy to speak to us in lots of different ways," Mr. Egan said. "We just need to find new ways of listening." Source: "New Way to Read Economy" WSJ_ARTICLE  April 8th, Carli Tuna, Blog by ARC's Steve Banker Apr 12, 2010 Alan Greenspan used cardboard box purchases and other 'source-commodity' indicators. The Carli Tuna WSJ article said that truck diesel fuel sales are a reliable indicator. What factor do you and your company use as future forward indicators? .. is it quotes, perhaps calls into the call center or sales activity?  Is your business moving to the internet and your supply chain driven by your iStore?  How do your distributors, retailers and supply chain partners provide the 'side-line' signals to you to either ramp up or contract production? With competition being only one click away, organizations need to know with higher degrees of certainty, what the econmic 'tea-leaves' are telling us and how firms need to react with production and shipping forecasts.  Firms using the latest forecasting and supply chain analytical (Bus.Intelligence) tools and technologies appear to be leading their markets "Had we been aware of that data in 2008," Mr. Leamer said, "we would have made a different call." .        


  • Managing Social Relationships for the Enterprise – Part 1

    - by kellsey.ruppel
    By Reggie Bradford, Senior Vice President, Oracle. Today, Mark Hurd, President of Oracle, Thomas Kurian, Executive Vice President of Oracle, and I discussed the strategic importance of how social media is impacting the enterprise and how it is changing the way customers, prospects, employees and investors interact with brands worldwide. Oracle understands that the consumer is in control, and as such, brands must evolve and change to meet growing needs. In addition, according to social media thought leader and Altimeter Group analyst Jeremiah Owyang, companies now average 178 corporate-owned social media accounts. When Oracle added leading social marketing, listening analytics and development tools from Vitrue, Collective Intellect and Involver to its Cloud Services Suite, we went beyond providing a single set of tools. We developed an entire framework, including a comprehensive social relationship management suite, to help companies move beyond the social enterprise and achieve the social-enabled enterprise. The fundamental shift from transaction to engagement means that enterprises need not only a social strategy, but should also ensure that the information and data received from social initiatives flow back to marketing, sales, support and service. Doing so enables companies to deliver a proactive and compelling experience and provides analytics to turn engagement into opportunity - and ultimately that opportunity into revenue. On September 13, 2012, I am delighted to sit down with Jeremiah to further the discussion about how enterprises are addressing social media strategies and managing content. In addition, we will be taking your questions after the webinar via Twitter (@Oracle, @ReggieBradford, @cfinn, @jowyang). Use #oracle and #socbiz to submit questions and follow the conversation. I look forward to speaking with you and answering your questions online. For more information about becoming a social-enabled enterprise, visit www.oracle.com/social. And don't miss the insights of other social business thought leaders at www.oracle.com/goto/socialbusiness.


  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:
    - it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database;
    - it provides native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.
    Node.js is a complete web platform built around JavaScript, designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation
    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, how to query the data, and how to get started.

    Connecting to the database
    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

    var nosql = require("mysql-js");
    var dbProperties = {
        "implementation" : "ndb",
        "database" : "test"
    };
    nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data
    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

    create table employee (
      id int not null primary key,
      name varchar(32),
      salary float
    ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find('employee', 0, onData);
    };

    var onData = function(err, data) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(data));
      // ... use data in application
    };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

    var annotations = new nosql.Annotations();

    var Employee = function(id, name, salary) {
      this.id = id;
      this.name = name;
      this.salary = salary;
      this.giveRaise = function(percent) {
        this.salary *= (1 + percent); // multiply by 1 + percent so giveRaise(0.12) is a 12% raise
      };
    };

    annotations.mapClass(Employee, {'table' : 'employee'});

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find(Employee, 0, onData);
    };

    Updating data
    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function:

    var onData = function(err, emp) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp); // oops, session is out of scope here
    };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables (a sketch of the closure approach appears at the end of this post), or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      session.find(Employee, 0, onData, session);
    };

    var onData = function(err, emp, session) {
      if (err) {
        console.log(err);
        // ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp, onUpdate); // session is now in scope
    };

    var onUpdate = function(err, emp) {
      if (err) {
        console.log(err);
        // ... error handling
      }
    };

    Inserting data
    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create an instance and persist it:

    var onSession = function(err, session) {
      var data = new Employee(999, 'Mat Keep', 20000000);
      session.persist(data, onInsert);
    };

    Deleting data
    To remove data from the database, use the session's remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

    var onSession = function(err, session) {
      var key = new Employee(999);
      session.remove(key, onDelete);
    };

    More extensive queries
    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate
    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here
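
    As referenced above, here is a minimal sketch of the closure alternative for keeping session in scope, reusing the same employee example. This variant is our illustration of the technique mentioned in the tutorial, not code taken from it:

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        return; // ... error handling
      }
      // The inner callback closes over 'session', so no extra callback parameters are needed.
      session.find(Employee, 0, function(err, emp) {
        if (err) {
          console.log(err);
          return; // ... error handling
        }
        emp.giveRaise(0.12);
        session.update(emp, onUpdate); // 'session' is captured by the closure
      });
    };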


  • I spoke at SQL Saturday #77 and all I got was this really awesome speaker's shirt!

    - by Most Valuable Yak (Rob Volk)
    Yeah, it was 2 weeks ago, but I'm finally blogging about something! I presented Revenge: The SQL! at SQL Saturday #77 in Pensacola on June 4.  The session abstract is here, and you can download the slides from that page too.  You can see how I look in the speaker's shirt here. Overall it went pretty well.  I discovered a new bit of evil just that morning, and in a carefully considered, agonizing decision-making process that was fully documented, tested, and approved…nah, I just went ahead and added it at the last minute.  Which worked out even better than (not) planned, since it screwed me up a bit and made my point perfectly.  I had a few fans in the audience, and one of them recorded it for blackmail material posterity. I'd like to thank Karla Landrum (blog | twitter) and all the volunteers for putting together such a great event, and for being kind enough to let me present. (Note to Karla: I'll get the next $100 to you as soon as I can.  Might need a few extra days on the next $100.) Thanks to Audrey (blog | twitter), Peg, and Dorothy for attending and keeping the heckling down.  Thanks also to Aaron (blog | twitter) for providing room and board and also not heckling.  Thanks to Julie (blog | twitter) for coming up with the title for the presentation.  (Boo to Julie for getting sick and bailing out on us.)  And thanks to all of them for listening to a preview and offering their suggestions and advice! Cross your fingers that I get accepted at SQL Saturday 81 in Birmingham, SQL Saturday 85 in Orlando, or SQL Saturday 89 in Atlanta, or just attend them anyway!


  • Welcome to JavaOne!

    - by marius.ciortea
    Welcome to this year's JavaOne conference! We are glad you dropped by. We want to keep you informed of all the happenings around JavaOne: all the events leading up to the conference and all the events during the conference week itself. We'll cover announcements, news, planning (but we won't make you go to any meetings), and snafus (nothing that makes us look too bad, of course). We'll even throw in a contest or two to make sure you are paying attention. We'll post a couple of times a week, and then more frequently as we get closer to September. There's a group of us, and we cover the Java beat, JUGs, Oracle Technology Network, Oracle Solaris, and lots more. What do you want to hear about? Let us know. A group of us from the office went to see the movie Iron Man 2 (it just debuted in the United States) last week, and it reminded us of Java, the Java community, and JavaOne. In all three cases, from many disparate (and sometimes seemingly incompatible) parts and people, something comes together that works, is cool, and helps make a better world. Right now, there are hundreds of little islands of planning, all busy answering questions for JavaOne: What sessions get selected? What goes in the Mason Street tent (until a few weeks ago, the question was: will there be a tent on Mason Street?)? What do the JUGs need? Which Oracle ACEs will be there? Can we do a surf theme at the OTN party? And, somehow, like an Iron Man suit, they all come together and work to make a great event. At least, we hope it will be great. That's for you to decide. Please don't be shy - give us your comments and suggestions. We'll be listening. P.S. You can attend Stark Expo online at Oracle.com/ironman2, where you can train to become a "Master Cloud Operative." I got my MCO certification. I wish I had a card to put in my wallet.


  • Steam freezes at login screen

    - by Snail284069
    I have just installed Steam on Xubuntu, and after it finished installing it went to the login screen, but the screen is frozen and I cannot press the buttons. When running Steam through the terminal it says:

    alex@Craptop:~$ steam
    Running Steam on ubuntu 14.04 32-bit
    STEAM_RUNTIME is enabled automatically
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Gtk-Message: Failed to load module "overlay-scrollbar"
    Gtk-Message: Failed to load module "unity-gtk-module"
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    [0522/174755:WARNING:proxy_service.cc(958)] PAC support disabled because there is no system implementation
    Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
    Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
    Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    [HTTP Remote Control] HTTP server listening on port 35849.
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Process 2764 created /alex-ValveIPCSharedObjects5
    Installing breakpad exception handler for appid(steam)/version(1400690891_client)
    Generating new string page texture 12: 48x256, total string texture memory is 49.15 KB
    Generating new string page texture 13: 256x256, total string texture memory is 311.30 KB
    Generating new string page texture 14: 128x256, total string texture memory is 442.37 KB
    Generating new string page texture 15: 384x256, total string texture memory is 835.58 KB

    and then the terminal gets stuck too, letting me type into it but not doing anything. I tried reinstalling and restarting the computer, but it still keeps happening.


  • Unable to decide whether to continue or quit and start a new career

    - by latif mohammad khan
    I am working at a small company. I joined as a Java developer, but they told me that since I am a fresher I would work as an Android developer instead. I asked one of my lecturers about Android development, and he replied: why go into mobile development, which is not standardized (Nokia's Symbian lost out, and mobile OSes change quickly)? It's better to get a job as a Java developer. After hearing his words I was not satisfied with my job and thought of leaving it to search for a Java developer position. But I didn't have much confidence in finding a job at that time (it took me one and a half years after graduating to get this one), so I decided to keep working as an Android developer, learning the new technology and practising Java at home. On my first day they introduced me to the team leads and assigned me under one of them. After a few days I came to know that my team lead has only one year of experience: he joined here as a fresher, did R&D, and is now my team lead. If I ask him a question, he just searches the internet and replies (sometimes his explanation is wrong, and I correct it myself by searching the net). In my company they don't use the latest technologies and they don't follow any design patterns, because they don't know them. They give me very low pay and a lot of work. I don't mind the pay, because I am a fresher, but I do mind the work, which I feel is of little use to me (because they don't use the latest technologies, there are no design patterns, and no proper team lead). What I had hoped was to learn from the company and the team leads how projects are done. But here I feel like I am wasting my time, and if I go to another interview in the future they will ask about the latest technologies. Now I don't know what to do: whether to quit the job and learn another technology which has good demand, like SAP ABAP, or to continue here. Please give me advice. Thanks.


  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do future releases of Solaris need to be to stay modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those, held a couple of weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned that what follows are rough ideas, and as I'll discuss later, none of them has any commitment, schedule, working code, or even a plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here; you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas. Making Network Admins into Socially Networking Admins We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed measure of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system.
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or to skip preventative maintenance, in the hopes of causing a catastrophic failure that would earn more points to bolster their resume as they look for a job elsewhere, without worrying about the effect on your business of a mission-critical server going down. More Z's for ZFS Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam, needing more disk space than you have and unable to wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed. Universal Single Sign On One thing all the engineers agreed on was that we still had far too many "Single" sign-ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would log in to your Solaris account upon receipt of a cookie from their identity service. Pinning notes to the cloud We also noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary: some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple of years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem?
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation. Location service for Solaris servers A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million of SPARC servers in this building, waiting for you to steal them,” so it's probably best it was rejected.] Harnessing the power of the GPU for Security Most modern OSes make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution: using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one, if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model, and compare it to the one in the authentication database.
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy-use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days. Reaction to these ideas After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish-themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ends on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved beyond just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.


  • Bridged VM guest does not get IPv6 prefix

    - by Arne
    I have a similar problem and setup as described in IPv6 does not work over bridge. My host gets an IPv6 prefix, but the guest VM only gets a link-local fe80:: prefix. Using tcpdump I can see that solicit messages are going out from the guest, but the host (ubuntu-server) doesn't seem to respond:

    arne@ubuntu-server:/var/log$ sudo tcpdump -i br0 host fe80::5054:00ff:fe4d:9ae0
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes
    14:31:15.314419 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 4 group record(s), length 88
    14:31:15.322337 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
    14:31:15.502374 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
    14:31:15.743894 IP6 fe80::5054:ff:fe4d:9ae0.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
    14:31:15.802389 IP6 fe80::5054:ff:fe4d:9ae0 > ff02::16: HBH ICMP6, multicast listener report v2, 4 group record(s), length 88
    14:31:17.906580 IP6 fe80::5054:ff:fe4d:9ae0.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit

    I had a firewall issue which I fixed by adding the following (copied from similar IPv4 before.rules settings) to /etc/ufw/before6.rules at the end, before the COMMIT statement:

    # allow bridging (for KVM)
    -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    I am running the host on Ubuntu 14.04 Server, so I guess I could have used dnsmasq, but I didn't find any howto for it, so I used radvd (which had to be installed) with the following configuration in /etc/radvd.conf:

    interface br0 {
        AdvSendAdvert on;
        AdvLinkMTU 1480;
        prefix 2a01:79d:xxx::/64 {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };

    This didn't help though, so I guess I must have configured it wrong? Any help appreciated. Br, Arne. PS: I wish the Ubuntu documentation included how to configure virtualization to work with IPv6.


  • Transient VO: Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows. Essentially, they had a common set of parameters to be used. Initially we resorted to the use of pageFlowScope variables, but they are tightly coupled with the individual task flows. So, how should the communication happen? Some of the alternatives we brainstormed are:
    1. Usage of an ADF contextual event - This is a powerful feature indeed for such use cases, but there is a considerable cost involved with it. So, before resorting to it, you have to make sure that you have a good enough reason to use it. It actually does a server roundtrip, and the issuing of an event and the listening part also require your attention!
    2. Use a transient VO with shared data control scope - With shared data control scope, the transient VO rows will be persistent across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating a row, updating a row and deleting a row (a sketch of these utility methods follows below). You also have to make sure that the VO row is initialized per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI.
    Hope this helps; this should be a common use case across apps.
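
    As an illustration of option 2, here is a minimal sketch of what such a transient VOImpl with utility methods might look like. This is not code from the original use case: the class, attribute and method names are hypothetical, and it assumes a transient VO configured with shared data control scope.

    // SharedParamsVOImpl.java - hypothetical sketch of a transient VO implementation class
    import oracle.jbo.Row;
    import oracle.jbo.server.ViewObjectImpl;

    public class SharedParamsVOImpl extends ViewObjectImpl {

        // Creates the single parameter row if it doesn't exist yet; call this once
        // per HTTP request, e.g. from a bookmark method registered in adfc-config.xml.
        public void initParamsRow() {
            if (getRowCount() == 0) {
                Row row = createRow();  // transient row, never posted to the database
                insertRow(row);
            }
        }

        // Updates one shared parameter on the row.
        public void setParam(String attrName, Object value) {
            Row row = first();
            if (row != null) {
                row.setAttribute(attrName, value);
            }
        }

        // Reads one shared parameter from the row.
        public Object getParam(String attrName) {
            Row row = first();
            return (row != null) ? row.getAttribute(attrName) : null;
        }

        // Removes the row when the parameters are no longer needed.
        public void clearParamsRow() {
            Row row = first();
            if (row != null) {
                row.remove();
            }
        }
    }

    Because the data control scope is shared, regions in both task flows bind to the same VO instance and thus see the same row, so a parameter set from one region can be read from the other.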


  • Android device unable to obtain ip address connecting to AP created with Hostapd

    - by user114392
    I have a wired internet connection that works behind an authenticated proxy server. I followed the steps mentioned here and managed to create a hotspot which my Google Nexus 7 detects. However, it seems stuck at "obtaining an IP address" and is not able to connect to the internet. I initially received the following error message when running the script in the terminal:

    dnsmasq: failed to create listening socket for 127.0.0.1: Address already in use [fail]

    I figured it was because of a conflict with NetworkManager, so I commented out the "dns=dnsmasq" line in the NetworkManager configuration file. After a network-manager restart, the first error doesn't show up, but I get the following:

    Configuration file: /etc/hostapd.conf
    Failed to create interface mon.wlan0: -23 (Too many open files in system)
    Try to remove and re-create mon.wlan0

    In both cases, however, the hotspot is created and is detected by my Android device - only it cannot "obtain an IP address" and connect to it. Is it because my eth0 connects via a proxy server? Or could there be something wrong with the dnsmasq config? Any help would be appreciated.
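
    For reference, a minimal sketch of a dnsmasq configuration that serves DHCP only on the hotspot interface might look like the following. The interface name and address range here are assumptions to adapt to the actual setup; binding dnsmasq to the hotspot interface avoids clashing with another resolver already listening on 127.0.0.1, and the dhcp-range is what lets clients actually obtain an IP address:

    # /etc/dnsmasq.conf - hypothetical values, adjust to your setup
    interface=wlan0                                # serve DHCP/DNS only on the hotspot interface
    bind-interfaces                                # bind to that interface alone, avoiding the 127.0.0.1 conflict
    dhcp-range=192.168.150.2,192.168.150.100,12h   # address pool handed out to connecting clients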


  • ScreenManagement better practices?! TextBox not focusing

    - by xykudyax
    I saw a question here about using DataTemplates with WPF for screen management; I was curious and gave it a try. I think the idea is amazing and very clean. Though I'm new to WPF, I have read many times that almost everything should be done in XAML and very little should be "coded behind". My question revolves around the DataTemplate idea: WHERE should the code that calls the transitions live? Where should I define which commands are available on which screens? For example:
    [ScreenA] Commands: Pressing B - Goes to state B; Pressing ESC - Exits
    [ScreenB] Commands: Pressing A - Goes to state A; Pressing SPACE - Exits
    Where do I define the key event handlers, and where do I call the next screen? I'm doing this as a hobby for learning, and "if you are learning, better learn it right" :) Thank you for your time. Yes, the Q/A I was talking about is: What's a good way to handle game screen management in WPF? What I've done so far was to create a Screen class (derived from UserControl) and create some virtual methods:
    - one for initializing stuff (like focusing a given component by default)
    - one for input handling - I handle it with a switch case, listening to the PreviewKeyDown event from the parent container (MainWindow). I'm not able to do it another way - help?!
    - and finally one that removes the key event handler when the screen is terminated: Parent.PreviewKeyDown -= OnKeyDown;
    Am I doing okay? I face a problem: when I add a new screen (UserControl) containing a TextBox, I'm not able to give it autofocus :/ The caret is there but is not blinking, and I have to hit TAB before being able to input anything at all :/


  • IoT? Time for Enterprise Architecture

    - by OTN ArchBeat
    Of course you've been listening to the latest OTN ArchBeat Podcast on the challenges and opportunities in the Internet of Things. If so, you'll also be interested in ZDNet blogger Joe McKendrick's recent post, Will the 'Internet of Things' make CIOs' jobs harder?. In that post McKendrick offers this important bit of advice that will certainly have architects saying "I told you so": Enterprises need to develop architectural approaches to the management of data, meaning the development of repeatable processes to source, ingest, transform and store information. For years, IT managers simply bought more hardware and addressed data with one-off integration projects. Now it's time for enterprise architecture. IoT is an important new phase in the evolution of enterprise IT. Challenging? You bet! But meeting any such challenge requires big, broad thinking and planning. In that context Enterprise Architecture has always been important. But as IoT gains traction and speed, enterprise architecture should be top of mind for all concerned.


  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view of some of the elements in Extended Events. Before we start, let's review the first Extended Events objects that we are going to use:
    - Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session.
    - Targets: An Extended Events object that can be used to log event information.
    Also, it is important to understand the following Extended Events concept:
    - Session: A server object created by the user that defines functionality to be executed every time a set of events happens.
    It's time to write a small "Hello World" using Extended Events. This will help understand the above terms. We will use:
    - The event sqlserver.error_reported: this event gets fired every time an error happens in the server.
    - The target package0.asynchronous_file_target: this target stores the event data on disk.
    - A session that we will create, which sends all the error_reported events to the file target.
    Before we get started, a quick note: don't run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.

    -- TRIES TO ELIMINATE PREVIOUS SESSIONS
    BEGIN TRY
        DROP EVENT SESSION test_session ON SERVER
    END TRY
    BEGIN CATCH
    END CATCH
    GO

    -- CREATES THE SESSION
    CREATE EVENT SESSION test_session ON SERVER
    ADD EVENT sqlserver.error_reported
    ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (set filename = 'c:\temp\data1.xel', metadatafile = 'c:\temp\data1.xem')
    GO

    -- STARTS THE SESSION
    ALTER EVENT SESSION test_session ON SERVER STATE = START
    GO

    -- GENERATES AN ERROR
    RAISERROR (N'HELLO WORLD', -- Message text.
               1, -- Severity.
               1, 7, 3, N'abcde'); -- Other parameters.
    GO

    -- STOPS LISTENING FOR THE EVENT
    ALTER EVENT SESSION test_session ON SERVER STATE = STOP
    GO

    -- REMOVES THE EVENT SESSION FROM THE SERVER
    DROP EVENT SESSION test_session ON SERVER
    GO

    -- READS THE EVENT DATA FROM THE FILE TARGET
    select CAST(event_data as XML) as event_data
    from sys.fn_xe_file_target_read_file('c:\temp\data1*.xel', 'c:\temp\data1*.xem', null, null)

    This query will output the event data with our first hello world in the Extended Events format:

    <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event>

    More on parsing event data in this post: Reading event data 101.
    Now let's move on to the other three Extended Events objects:
    - Actions: These get executed before events are published (i.e., stored in buffers to be transferred to the targets). Currently they are used to attach additional data to an event (like the T-SQL statement related to it, the session, or the user) or to generate a mini dump.
    - Predicates: Predicates are logical expressions that specify when events fire (e.g. only listen to errors with a severity greater than 16). They are composed of two kinds of Extended Events objects:
      o Predicate comparators: define an operator for a pair of values. Examples: severity > 16; error_message = 'Hello World!!'
      o Predicate sources: values that can also be used by the predicates. They are generic data that isn't usually provided in the event (similar to the actions). Example: sqlserver.username = 'Tintin'
    As logical expressions, predicates can be combined using logical operators (and, or, not). Note: each pair always has to be an event field or predicate source first, and then a value.
    Let's do another small example. We will trigger several errors, but publish only the ones that have severity = 2 and a message != 'filter'. To verify this we will use the action sql_text, which will attach the SQL statement to the event data:

    -- TRIES TO ELIMINATE PREVIOUS SESSIONS
    BEGIN TRY
        DROP EVENT SESSION test_session ON SERVER
    END TRY
    BEGIN CATCH
    END CATCH
    GO

    -- CREATES THE SESSION
    CREATE EVENT SESSION test_session ON SERVER
    ADD EVENT sqlserver.error_reported
        (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter')))
    ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (set filename = 'c:\temp\data2.xel', metadatafile = 'c:\temp\data2.xem')
    GO

    -- STARTS THE SESSION
    ALTER EVENT SESSION test_session ON SERVER STATE = START
    GO

    -- THIS EVENT WILL BE FILTERED OUT BECAUSE SEVERITY != 2
    RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde');
    GO
    -- THIS EVENT WILL BE FILTERED OUT BECAUSE MESSAGE = 'FILTER'
    RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde');
    GO
    -- THIS ERROR WILL BE PUBLISHED
    RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
    GO

    -- STOPS LISTENING FOR THE EVENT
    ALTER EVENT SESSION test_session ON SERVER STATE = STOP
    GO

    -- REMOVES THE EVENT SESSION FROM THE SERVER
    DROP EVENT SESSION test_session ON SERVER
    GO

    -- READS THE EVENT DATA FROM THE FILE TARGET
    select CAST(event_data as XML) as event_data
    from sys.fn_xe_file_target_read_file('c:\temp\data2*.xel', 'c:\temp\data2*.xem', null, null)

    This last statement will output one event with the following data:

    <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">
      <data name="error">
        <value>50000</value>
        <text />
      </data>
      <data name="severity">
        <value>2</value>
        <text />
      </data>
      <data name="state">
        <value>1</value>
        <text />
      </data>
      <data name="user_defined">
        <value>true</value>
        <text />
      </data>
      <data name="message">
        <value>PUBLISH</value>
        <text />
      </data>
      <action name="sql_text" package="sqlserver">
        <value>-- THIS ERROR WILL BE PUBLISHED
    RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
    </value>
        <text />
      </action>
    </event>

    If you see more events, check whether you have deleted the previous event files. If so, please run

    -- Deletes previous event files
    EXEC SP_CONFIGURE
    GO
    EXEC SP_CONFIGURE 'xp_cmdshell', 1
    GO
    RECONFIGURE
    GO
    XP_CMDSHELL 'del c:\temp\data*.xe*'
    GO

    or delete them manually.

    More info on Events: Extended Event Events
    More info on Targets: Extended Event Targets
    More info on Sessions: Extended Event Sessions
    More info on Actions: Extended Event Actions
    More info on Predicates: Extended Event Predicates

    Read the article

  • How do I install FreeNX server so that it works correctly?

    - by Niklas
I've tried every possible way of installing the server now; I've read every how-to available and I still can't get it to work. Please let me know which step I'm getting wrong. I'm using Ubuntu 10.10. I will mainly be referring to the following how-to, but also this one, and this one.
1. First I add the PPA
2. Install FreeNX
3. Download the special FreeNX package as stated in the how-to, fix ownership, and install it
4. Create a custom SSH key
5. Copy the file /var/lib/nxserver/home/.ssh/client.id_dsa.key to the client and import it in NoMachine (Windows 7 x64)
6. Check that both the user I will be logging in with and the user nx are in AllowedUsers in the /etc/ssh/sshd_config file
7. Check the port that SSH is listening on
8. Log in through NoMachine with my regular Ubuntu user account
I always receive the message "authentication failed for [user]" when I try to log in. And I can't see the user "nx", which is said to be created during installation, when I look under System > Administration > Users & Groups. Can anyone please enlighten me as to which step I have missed or misunderstood? Thank you very much! (Or is there an easier way of enabling remote desktop so that it can be used with a Windows machine? I'd prefer not to use VNC because I was hoping to get better performance than that. And when I tried using XRDP I only received a black screen on the client.)
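A minimal sketch of how the checks in steps 5-7 can be verified from a terminal on the server (paths are the defaults from the how-to; note that the sshd directive is spelled AllowUsers, not AllowedUsers, and that nx is a system account, which Users & Groups hides by default):

# Confirm sshd allows both users and see which port it listens on
grep -E '^(AllowUsers|Port)' /etc/ssh/sshd_config

# The nx system account won't appear in Users & Groups; check for it directly
getent passwd nx

# Test the exported key locally, taking the NoMachine client out of the picture
ssh -i /var/lib/nxserver/home/.ssh/client.id_dsa.key nx@localhost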

    Read the article

  • Certifications in the new Certify

    - by richard.miller
The most up-to-date certifications are now available in Certify - New Additions Feb 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know.
NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help!
Japanese
Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications.
Certifications viewable in the new Certify Search:
- Oracle Database
- Oracle Database Options
- Oracle Database Clients (they apply to both 32-bit and 64-bit)
- Oracle Enterprise Manager
- Oracle Beehive
- Oracle Collaboration Suite
- Oracle E-Business Suite, now with Release 11i & 12!
- Oracle Siebel Customer Relationship Management (CRM)
- Oracle Governance, Risk, and Compliance Management
- Oracle Financial Services
- Oracle Healthcare
- Oracle Life Sciences
- Oracle Enterprise Taxation Management
- Oracle Retail
- Oracle Utilities
- Oracle Cross Applications
- Oracle Primavera
- Oracle Agile
- Oracle Transportation Management (G-L)
- Oracle Value Chain Planning
- Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+
- Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2)
Certifications viewable in Classic Certify (the "old" user interface; clicking the "Classic Certify" link from Certifications > QuickLinks will take you there):
- Enterprise PeopleTools Release 8.49, Release 8.50, and Release 8.51
Other Resources:
- See the Tips and Tricks for the new Certify.
- Watch the 4 minute introduction to the new Certify.
- Or learn how to get the most out of Certify with an advanced searching and features demo.

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
Internet of Things & Commerce Series: Part 3 (of 3)

And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, "The Next 7,000 Days", introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together.

-------

QUESTION: It's your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

Kristen: I'll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering, and how they can adapt their content and its delivery to meet those needs.

And mobile shopping doesn't stop at the consumer/buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

Kristen: Well, without question, connecting the digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn't a pure-play online retailer), they have to connect the dots, because shoppers, whether consumers or B2B buyers, don't think in clearly defined channels anymore. The expectation is connectedness: for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online / pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc.

A few years ago, the main link between store and digital was the smartphone. We all remember when "apps" became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel. Put more broadly: how these pieces fit together in the overall brand puzzle.

The same could be said for "showrooming." Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it has emerged that the inverse is becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle's acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

- Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, marking the first entry into what was then called "cross-channel." The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provided a means of connecting data from the call center and online channels.
- REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform, which enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.
- Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.
- Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store - see below) on top of Page Builder (and eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology's capabilities for omnichannel commerce to continue to drive innovation for our customers.
- Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After establishing the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app, and with the enablement of the core Commerce Service Center features. The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.
- Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.
- Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.
- Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products, and then transact using the ATG engine.
- Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience, and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

------

Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because, if the digital fabric of our everyday lives continues to change at the same pace, the next five years (that's just under 2,000 days) will be dramatic.

----------

THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • CRM Evolution 2014: Mediocrity is the New Horrible in Customer Service

    - by Tuula Fai
    "Mediocrity is the new horrible in customer service," Blair McHaney, Gold's Gym Almost everyone knows that customers' expectations have risen. But, after listening to two days of presentations at CRM Evolution, I think it’s more accurate to say that customers' expectations have skyrocketed. Fortunately, most companies have gotten the message and are taking their customer service to a higher level. For those who've been hesitant to 'boldly go where their customer service organization has not gone before,' take heart. I’ve got some statistics that will encourage you to take those first few steps. Why should I change? By engaging customers online, ancestry.com achieved a 99.5% customer satisfaction score (CSAT) while improving retention and saving millions on greater efficiency, including a 38%-50% drop in inbound calls and emails.1 By empowering employees to delight customers, Gold’s Gym achieved a 77.5% Net Promoter Score (NPS) and 22% customer churn rate. No small feat when you consider the industry averages are 40% NPS and 45% churn.2 By adapting quickly to social media, brands like Verizon have benefited from social community members spending 2.5x-10x more than average customers.3 ‘The fierce urgency of now’ is upon us in customer service. You can take your customer service to a higher level! To find out more, click here CRM Evolution Customer Service Experience Footnotes: 1. Arvindh Balakrishnan, Is Your Customer Service Modern?2. Blair McHaney, Wire Your Organization with Customer Feedback3. Becky Carroll, The Power of Communities for Improving the Service Experience and Building Advocates

    Read the article

  • CodePlex Daily Summary for Wednesday, May 21, 2014

CodePlex Daily Summary for Wednesday, May 21, 2014

Popular Releases
- Spotify Plugin for Jamcast: Spotify Plugin for Jamcast v2.0: This release works with the Jamcast 2.0 API and implements search. Requires libspotifydotnet 4.0.0.0 and libspotify 12.1.51.
- libspotify.NET - a managed interop library for libspotify: libspotify.NET v4.0 (x86): Bugfixes, API changes.
- SharpLightReporting: SharpLightReporting 1.0.2: Bug fix: the picture tag is now able to get the data from the property; NullReferenceException is no longer thrown. Added: addtocolpos and addtorowpos attributes to the chart tag.
- Windows Embedded Board Support Package for BeagleBone: WEC7 BeagleBone Black 01.05.00: Demo image. Runs on BeagleBone White and Black. On BeagleBone Black it works with uSD- or eMMC-based images. SDK and demo tests included. Now with Silverlight and OpenGL (PowerVR) support! Built and maintained in the U.S.A.!
- MISAO: Ver. 5.4: Fixed bugs (Nicovideo viewer add-in). Added Masakari option (Nicovideo viewer add-in).
- SubVersionOne: SubVersionOne 1.3: Removed the reference to the old V1 SDK and changed all queries to use the REST API. Added status and scopes to the workitem view.
- VisioAutomation: Visio PowerShell Module (VisioPS) 1.1.26: Documentation is here: http://sdrv.ms/11AWkp7. Screencast: http://vimeo.com/61329170. Files: for easy installation, download and run the MSI file; to install manually, a ZIP file is provided. ChangeLog: Version 1.1.25 changed the module name to Visio, so to import the module use Import-Module Visio, and replaced Invoke-VisioDraw with Out-Visio; Version 1.1.25 fixed Get-VisioCustomProperties; Version 1.1.23 fixed a logging bug - was dividing by zero when operations were done ...
- Extended T-SQL Collector: 1.1.5: Modified to allow collecting system collection sets where the "Generic T-SQL Query collector type" is used. Pick the bitness of your SQL Server installation. If you have a 32-bit SQL Server instance installed on a 64-bit Windows Server, use the x86 setup kit. Both setup kits install the exact same code, but the target directory is different on x64 machines ("Program Files" for x64 and "Program Files (x86)" for x86).
- EdiFabric: Release 3.1: Fixed parse tree generation for the latest validation schemas.
- Compare .NET Objects: Version 2.03.0.0: Support for the System.Drawing.Font type; new option to ignore unknown object types.
- QuickMon: Version 3.11: This release adds some major changes to the core monitoring engine. 1. Polling overrides: each collector entry can specify a minimum time between updates for it and dependent collector entries. 2. Polling frequency sliding: in addition to polling overrides, a collector entry can specify a 'sliding' polling frequency if the state remains the same; the frequency slows down, reducing the overhead of polling a stagnant resource. 3. The monitor pack has an overriding frequency. If used...
- Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed, and also a template data cache issue.
- WordMat: WordMat v. 1.06: Check WordMat.blogspot.com for a complete description of new features.
- Wsus Package Publisher: Release v1.3.1405.17: Added a Russian translation (thanks to VSharmanov). Fixed a bug that made WPP crash if the user clicked "Connect/Reload" while the Report tab was loading. Enhanced the way WPP stores the password for remote computer commands.
- MoreTerra (Terraria World Viewer): MoreTerra 1.12.9: Compatibility: updated to account for the new 1.2.4.1 format. Issues: not all items have been added; some colors for new tiles may be off. I wanted to get this out so people have a usable program.
- LINQ to Twitter: LINQ to Twitter v3.0.3: Supports .NET 4.5x, Windows Phone 8.x, Windows 8.x, Windows Azure, Xamarin.Android, and Xamarin.iOS. New features include the Status/Lookup and Mute APIs, plus bug fixes. 100% Twitter API v1.1 coverage; async; Portable Class Library (PCL).
- CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.26.0: Added access to the Release Notes during 'Check for Updates...'. Debug panels: added support for generic type members; members are grouped into 'Raw View' and 'Non-Public members' categories; implemented a dedicated (array-like) view for Lists and Dictionaries. http://download-codeplex.sec.s-msft.com/Download?ProjectName=csscriptnpp&DownloadId=846498
- ClosedXML - The easy way to OpenXML: ClosedXML 0.70.0: A lot of fixes. See history.
- SFDL.NET: SFDL.NET (2.2.9.2): Changelog (translated from German): new icon; Xup.in CnL plugin bugfix.
- SEToolbox: SEToolbox 01.030.008 Release 1: Fixed the cube editor failing to apply color to cubes. Added a replace-cube dialog and a Build Percent dialog to the cube editor. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid or converting a 3D model to an asteroid (there still appear to be limitations on rare ore in small asteroids). Allowed ore selection for asteroid file import (you can copy/import and convert an existing asteroid to another ore). Added progress bars to common long-running operations. Fixed ...

New Projects
- .Net Flexport: Library for importing and exporting data from and to CSV or text files, using attributes to describe the export format.
- Chess Platform: Simple chess platform with basic players and a GUI.
- DMEditor: DMEditor is a "kit librarian" for the Alesis DM10 electronic drum module, allowing you to save and load drum kits, individual instruments, or even full modules.
- Endomondo Export: Endomondo GPX/TCX activity exporter and downloader.
- Hunt for Red October HTW: Hunting the Red October (Wumpus) at Microsoft 2014.
- JuegoIngenio: ALTO JUEGO.
- Orchard Liquid Markup: Orchard module adding support for templates written in Liquid Markup (http://liquidmarkup.org/). Uses DotLiquid (http://dotliquidmarkup.org/).
- Panorama: No summary.
- Punto de Venta TCU: Point-of-sale system (Spanish: "Sistema Punto de Venta").
- String.Format Diagnostic (Roslyn): The aim of the project is to enable "Roslyn" diagnostics for validating the format string supplied to String.Format.
- System Layer: Operating-system abstraction layer that will enable developers to create applications and device drivers that run on virtually any platform.
- XiaoQingDao

    Read the article

  • Why doesn't lsof show localhost TCP connections in 14.04?

    - by sfussenegger
I have two servers running 12.04.4 and 14.04.1 respectively. Both have nginx (port 80) and a Java process (port 8080). As expected, the lsof output for the Java process on the 12.04 machine shows a couple of established connections for port 8080 (e.g. TCP 127.0.0.1:8080->127.0.0.1:58067 (ESTABLISHED)). The 14.04 machine however does not. It only shows the listening port (TCP *:8080 (LISTEN)). I'm sure there are active connections though (confirmed by access logs, Java process status output, etc.). What has changed from 12.04 to result in this behavior? Can this change be the cause of the "Too many open files" errors I'm getting since moving from 12.04 to 14.04?

12.04:
$ dpkg -l lsof linux-image-virtual openjdk-7-jre nginx
ii  linux-image-virtual   3.2.0.59.70
ii  lsof                  4.81.dfsg.1-1build1
ii  nginx                 1.6.1-1~precise
ii  openjdk-7-jre         7u65-2.5.1-4ubuntu1~0.12.04.1

14.04:
$ dpkg -l lsof linux-image-virtual openjdk-7-jre nginx-full
ii  linux-image-virtual   3.13.0.32.38             amd64
ii  lsof                  4.86+dfsg-1ubuntu2       amd64
ii  nginx-full            1.4.6-1ubuntu3           amd64
ii  openjdk-7-jre:amd64   7u65-2.5.1-4ubuntu1~0.1  amd64
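A few diagnostic commands that may help narrow this down (a sketch; the ss filter syntax assumes the iproute2 tools shipped with 14.04):

# Ask lsof explicitly for TCP sockets on port 8080 (numeric output, no DNS lookups)
sudo lsof -nP -iTCP:8080

# Cross-check established connections with ss, which reads socket state via netlink
sudo ss -tn sport = :8080

# Inspect the open-file limit actually applied to the running Java process,
# relevant to the "Too many open files" errors
grep 'open files' /proc/$(pgrep -f java | head -n1)/limits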

    Read the article

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
While trying to monitor a JVM on a remote system using VisualVM, the activation of JMX gave me some challenges. Dr. Google and my employer's documentation quickly revealed some -D opts needed for JMX, but strangely it only worked for a Solaris 10 system (my setup: a MacOS laptop monitoring SPARC Solaris-based JVMs). On S11 with the same opts I saw "my" JVM listening on port 3000 (which I chose for JMX), but VisualVM was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked:

-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.port=3000 \
-Djava.rmi.server.hostname=s11name.us.oracle.com \

Maybe this post saves someone else the time I spent on research.
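A quick way to sanity-check a setup like this, reusing the host and port from the flags above (jconsole ships with the JDK and accepts a host:port argument; nc is assumed to be available):

# On the Solaris host: confirm the JMX connector is listening
netstat -an | grep 3000

# From the monitoring machine: check reachability, then attach an ad-hoc JMX client
nc -vz s11name.us.oracle.com 3000
jconsole s11name.us.oracle.com:3000

The explicit java.rmi.server.hostname matters because the RMI stub returned to the client embeds that name; if it resolves to an address the client cannot reach, the initial connection on port 3000 succeeds but the follow-up RMI connection fails, which is consistent with the symptom described above.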

    Read the article

  • Certifications in the new Certify - March 2011 Update

    - by richard.miller
The most up-to-date certifications are now available in Certify - New Additions March 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know.
NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help!
Japanese
Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications.
Certifications viewable in the new Certify Search:
- Enterprise PeopleTools Release 8.50 and Release 8.51 (Added March 2011!)
- Oracle Database
- Oracle Database Options
- Oracle Database Clients (they apply to both 32-bit and 64-bit)
- Oracle Beehive
- Oracle Collaboration Suite
- Oracle E-Business Suite, now with Release 11i & 12!
- Oracle Siebel Customer Relationship Management (CRM)
- Oracle Governance, Risk, and Compliance Management
- Oracle Financial Services
- Oracle Healthcare
- Oracle Life Sciences
- Oracle Enterprise Taxation Management
- Oracle Retail
- Oracle Utilities
- Oracle Cross Applications
- Oracle Primavera
- Oracle Agile
- Oracle Transportation Management (G-L)
- Oracle Value Chain Planning
- Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+
- Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2)
Certifications viewable in Classic Certify (the "old" user interface; clicking the "Classic Certify" link from Certifications > QuickLinks will take you there):
- Enterprise PeopleTools Release 8.49 (Coming Soon)
- Enterprise Manager
Other Resources:
- See the Tips and Tricks for the new Certify.
- Watch the 4 minute introduction to the new Certify.
- Or learn how to get the most out of Certify with an advanced searching and features demo.

    Read the article

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
I use the command line utility youtube-dl to download videos from YouTube and make mp3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following naming scheme: TITLE(artist-track)-ID.mp3. So an actual filename looks like: EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3. Some other file names in the folder look like: EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3 Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3 Stromae - Papaoutai-oiKj0Z_Xnjc.mp3 At first, this was no problem. It didn't bother me while listening to my music in Rhythmbox. But when moving to my phone or other devices it is pretty confusing to see such a long name, and some players, like the Samsung ones, treat that last part (the ID after the second dash) of the name as the album or something. I'd like to create a bash script that removes what's after the second dash in the name for all files, so they'll look like this: From: Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3 To: Martin Garrix - Animals (Official Video).mp3 Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command: youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
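A minimal sketch of the rename (assumes all files sit in one folder and the ID itself never contains a dash; the ${f%-*} expansion strips from the last dash, so titles that contain dashes of their own survive intact):

for f in *-*.mp3; do          # the glob skips files without any dash
    mv -n -- "$f" "${f%-*}.mp3"   # "Title-ID.mp3" -> "Title.mp3"; -n avoids clobbering on name collisions
done

For future downloads, youtube-dl's -o/--output option takes a filename template; a template like the one below (an assumption matching the desired scheme) should produce names without the ID:

youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 -o '%(title)s.%(ext)s' URL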

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >