Search Results

Search found 28784 results on 1152 pages for 'start'.


  • Join us for our WebLogic Community webcast on November 2nd 2012! OOW update WebLogic & ExaLogic

    - by JuergenKress
    NOVEMBER 2nd, 2012 AT 11:00 - 11:45 AM CET (BERLIN TIME) Do you want to learn the latest about WebLogic and ExaLogic? Would you like to know what Oracle announced at Oracle OpenWorld 2012? Join us for our WebLogic Community webcast on November 2nd, 2012! Don't miss this unique opportunity to learn about the latest announcements and product updates across the WebLogic and ExaLogic portfolio. Agenda: updates announced about Cloud Application Foundation, ExaLogic update, key examples from successful customers, updated roadmap, Q&A. Register to Attend! To Join the Webcast (Employees and Partners) AND / OR DIAL IN if you would like to dial in to the audio portion of the Webconference: Call ID: 8000524 & Call Passcode: 333111 Austria: +43(0)19286512 Belgium: +32(0)24010528 Denmark: +4532729222 Finland: +358(0)923193923 France: +33(0)176728936 Germany: +49(0)69222216106 Ireland: +353(0)12475650 Italy: +39(0)236008198 Netherlands: +31(0)207143543 Norway: +4721033443 Spain: +34914143755 Sweden: +46(0)856619465 Switzerland: +41(0)445804003 United Kingdom: +44(0)2081181001 Please click here to find more Local Numbers. Presenters: Maciej Gruszka, Senior Principal Product Manager, Tel: +4 (0) 8601 156 464, E-Mail: [email protected], LinkedIn: http://pl.linkedin.com/pub/maciej-gruszka/2/169/89 Jürgen Kress, WebLogic Partner Adoption EMEA, Tel. +49 89 1430 1479, E-Mail: [email protected], LinkedIn: http://de.linkedin.com/in/kress, Twitter: http://twitter.com/wlscommunity Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Looking forward to your participation! WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community Webcast,webcst,OOW updates WebLogic ExaLogic,WebLogic ppt,presenation,sales,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • phpMyAdmin Not Working

    - by glenbl54
    I recently installed phpMyAdmin onto Ubuntu Server 10.04 using sudo apt-get install phpmyadmin. The installation went fine and everything was working, including phpMyAdmin. I then restarted the server; apache2 now starts up, but when I navigate to http://192.168.1.72/phpmyadmin/ I get a 403 error. I have included the /etc/phpmyadmin/apache.conf file in the /etc/apache2/apache2.conf file. Here is /etc/phpmyadmin/apache.conf:

        # phpMyAdmin default Apache configuration
        Alias /phpmyadmin /usr/share/phpmyadmin
        <Directory /usr/share/phpmyadmin>
            Options FollowSymLinks
            DirectoryIndex index.php
            <IfModule mod_php5.c>
                AddType application/x-httpd-php .php
                php_flag magic_quotes_gpc Off
                php_flag track_vars On
                php_flag register_globals Off
                php_value include_path .
            </IfModule>
        </Directory>
        # Authorize for setup
        <Directory /usr/share/phpmyadmin/setup>
            <IfModule mod_authn_file.c>
                AuthType Basic
                AuthName "phpMyAdmin Setup"
                AuthUserFile /etc/phpmyadmin/htpasswd.setup
            </IfModule>
            Require valid-user
        </Directory>
        # Disallow web access to directories that don't need it
        <Directory /usr/share/phpmyadmin/libraries>
            Order Deny,Allow
            Deny from All
        </Directory>
        <Directory /usr/share/phpmyadmin/setup/lib>
            Order Deny,Allow
            Deny from All
        </Directory>

    The only change made since phpMyAdmin was installed is that TimeTrex was installed. Is there any way to manually start phpMyAdmin, or should it already be working once Apache has started?

    Read the article

  • Build a database from MS Word list information...

    - by Jayron Soares
    Can someone please advise me on how to approach a given problem: I have a sequential list of metadata in an MS Word document. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS so it can be queued into a database. For example: Process: Process Walker (1965) Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp. Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol= Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit. Parties: Walker Process Equipment, Inc. Sector: Systems is … Start Date: October 12-13, 1965 (argued) Summary: Food Machinery Company initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment. Report of the evolution process: petitioner, in answer to respond .. Importance: a) First case which established an analysis for the diagnosis of dispute… There are about 200 pages containing information like the above. I have in mind creating an algorithm in Python to break up this sequenced information and store it in a web database [an open source application that I'm looking for] in order to allow free consultation ...
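
    As a rough illustration only (the plain-text export name metadata_export.txt, the field labels used as delimiters, and the extract_process_names helper are assumptions for this sketch, not part of the question), a first pass in Python could pull out just the process names with a regular expression before worrying about the database side:

        import re

        def extract_process_names(text):
            """Return the PROCESS names from the sequential metadata dump.

            Each record is assumed to start with 'Process:' and the name is
            assumed to run until the next labelled field, 'Exact reference:'.
            """
            pattern = re.compile(r"Process:\s*(.+?)\s*(?=Exact reference:)", re.DOTALL)
            return [name.strip() for name in pattern.findall(text)]

        if __name__ == "__main__":
            # Hypothetical: the Word document saved/exported as plain text first.
            with open("metadata_export.txt", encoding="utf-8") as fh:
                for name in extract_process_names(fh.read()):
                    print(name)

    The same loop could later insert each full record into whichever open source web database ends up being chosen.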

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California. Monday, October 1 There are great networking activities on Sunday, September 30, but PLM specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join us to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and we’re very excited about it. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • Newbie, deciding Python or Erlang

    - by Joe
    Hi guys, I'm an administrator (Unix, Linux, and some Windows apps such as Exchange) by experience and have never worked in any programming language besides C# and scripting in Bash and, lately, PowerShell. I'm starting out as a service provider and using multiple network/server monitoring tools based on open source (Nagios, OpenNMS, etc.) in order to monitor servers. At this moment, inspired by a design that I came up with to do more than what is available in open source at this time, I would like to start programming and test some of these ideas. The requirement is a server software that captures a stream of data and stores it in a database (CouchDB or MongoDB preferably), while the client side (an agent installed on a server) would send this stream of data on a schedule of every 10 minutes or so. For these two core ideas, I have been reading about Python and Erlang besides Ruby. I plan to use either Amazon or Rackspace to run the server platform. This gives me the scalability needed when we have more customers with many servers. For that reason alone, I thought Erlang was a better fit (I could be totally wrong, new to this game), though I understand that Erlang has limited support in some ways compared to Ruby or Python. But I'm also totally new to the programming realm of things, and any advice would be greatly appreciated. Jo
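
    As a minimal sketch of the agent side of this idea (the COLLECTOR_URL endpoint, the collect_metrics stub, and the exact interval are assumptions for illustration; the server that actually writes to CouchDB/MongoDB is not shown), plain Python with only the standard library would look roughly like this:

        import json
        import time
        import urllib.request

        COLLECTOR_URL = "http://monitoring.example.com/ingest"  # hypothetical endpoint
        INTERVAL_SECONDS = 10 * 60  # the "every 10 minutes or so" schedule

        def collect_metrics():
            """Gather one snapshot of whatever the agent monitors (stubbed here)."""
            return {"host": "server-01", "timestamp": time.time(), "load": 0.42}

        def push(snapshot):
            """POST one JSON snapshot to the server that stores it in the database."""
            data = json.dumps(snapshot).encode("utf-8")
            req = urllib.request.Request(
                COLLECTOR_URL, data=data, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req) as resp:
                return resp.status

        if __name__ == "__main__":
            while True:
                push(collect_metrics())
                time.sleep(INTERVAL_SECONDS)

    Either Python or Erlang can express an agent like this comfortably; the harder scalability questions tend to sit on the server that receives and stores the snapshots rather than in the agent itself.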

    Read the article

  • Sam Abraham to Speak about MVC2 at the Florida.Net Miramar .Net User Group on July 13 2010

    - by Sam Abraham
    I am scheduled to give a presentation at the Miramar .Net User Group on July 13, 2010 about MVC and the new features in MVC2. This will be similar, yet will have more advanced content, since the group had already had an introduction to MVC in a previous meeting. Here is the topic and speaker bio: Sam Abraham To Speak At The LI .Net User Group on June 3rd, 2010 As you might know, I lived and worked on LI, NY for 11 years before relocating to South Florida. As I will be visiting my family who still live there in the first week of June, I couldn't resist reaching out to Dan Galvez, LI .Net User Group Leader, and asking if he needed a speaker for June's meeting. Apparently the stars were lined up right and I am now scheduled to speak at my "home" group on June 3rd, which I am pretty excited about. Here is a brief abstract of my talk and speaker bio. What's New in MVC2 We will start by briefly reviewing the basics of the Microsoft MVC Framework. Next, we will look at the new features introduced in the latest and greatest MVC2. Many new enhancements were introduced to both the MS MVC Framework and to VS2010 to improve developers' experience and reduce development time. We will be talking about new MVC2 features such as Model Validation, Areas, and Template Helpers. We will also discuss the new built-in MVC project templates that ship with VS2010. About the Speaker Sam Abraham is a Microsoft Certified Professional (MCP) and Microsoft Certified Technology Specialist (MCTS ASP.Net 3.5). He currently lives in South Florida where he leads the West Palm Beach .Net User Group (www.fladotnet.com) and actively participates in various local .Net community events as organizer and/or technical speaker. Sam is also an active committee member on various initiatives at the South Florida Chapter of the Project Management Institute (www.southfloridapmi.org). Sam finds his passion in leveraging the latest and greatest .Net technologies along with proven project management practices and methodologies to produce high quality, cost-competitive software. Sam can be reached through his blog: http://www.geekswithblogs.net/wildturtle

    Read the article

  • Are functional IntelliSense and code browsing more beneficial than the use of dependency injection containers?

    - by Gavin Howden
    This question is really based on PHP, but could be valid for other dynamically typed, interpreted languages, and specifically for the methods of generating code insight and object browsing in development environments. We use PhpStorm and find IntelliSense invaluable, but it is provided by some limited static analysis and parsing of doc comments. Obviously this does not lend itself well to obtaining dependencies through a container, as the IDE has no idea of the type returned, so the developer loses out on a plethora of (in the case of our framework anyway) rich documentation provided through the doc comments. So we start to see stuff like this, just so that code insight works:

        $widget = $dic->YieldInstance('WidgetA', $arg1, $arg2, $arg3, $arg4...);
        /** @var $widget WidgetA */

    In effect the comments are tightly bound, but worse, they fall out of sync when the code is modified but the comments are not:

        $widget = $dic->YieldInstance('WidgetB', $arg1, $arg2, $arg3, $arg4...);
        /** @var $widget WidgetA */

    Obviously the comment could be improved by referencing a Widget interface, but then we might as well use a factory and avoid the requirement for the extra type hints in the comments, and the DIC complexity / boilerplate. Which is more important to the average developer: code insight / IntelliSense, or 'nirvana' decoupling?

    Read the article

  • Trust: A New Line of Business

    - by ruth.donohue
    What do you think are the key factors in building and maintaining your company's reputation... Innovation? Price? Surprisingly, according to the recent 2010 Edelman Trust Barometer, survey respondents in the US valued transparency of business practices as well as company trustworthiness as the two most important factors influencing corporate reputation. What is trust? It's the confidence in a company's ability to do what is right for all its stakeholders -- shareholders, customers, employees, and the broader society at large -- and not just shareholders. Trust is an increasingly important component of maintaining your company's reputation and brand, and Western countries have seen an increase in global trust. Global businesses headquartered in the United States in particular have seen an 18 point boost to 54 percent in global trust. Whether this uptick represents the start of a new trend or a mere blip in the barometer remains to be seen. The Edelman report notes that the increase is "tenuous" as people expect companies to return to "business as usual" after the economy rebounds. This warning underscores the need for companies to continue engaging in open and frequent communications and business practices with their stakeholders across multiple channels, and to view trust as a "new line of business" to cultivate.

    Read the article

  • What are the best ways to cope with «one of those days»? [closed]

    - by Júlio Santos
    I work in a fast-paced startup and am absolutely in love with what I do. Still, I wake up in a bad mood as often as the next guy. I find that forcing myself to play out my day as usual doesn't help — in fact, it only makes it worse, possibly ruining my productivity for the rest of the week. There are several ways I can cope with this, for instance: dropping the current task for the day and getting that awesome but low-priority feature in place; doing some pending research for future development (i.e. digging up ruby gems); spending the day reading and educating myself; just taking the day off. The first three items are productive in themselves, and taking the day off recharges my coding mana for the rest of the week. Being a young developer, I'm pretty sure there's a multitude of alternatives that I haven't come across yet. How can programmers cope with off days? Edit: I am looking for answers related specifically to this profession. I therefore believe that coping with off days in our field is fundamentally different from doing so in other areas. Programmers (especially in a start-up) are a unique breed in this context in the sense that they tend to have a multitude of tasks at hand at any given moment, so they can easily switch between these without wreaking too much havoc. Programmers also tend to work based on clear, concise objectives — provided they are well managed either by themselves or a third party — and hence have a great deal of flexibility when it comes to managing their time. Finally, our line of work creates the opportunity — necessity, if you will — to fit in a plethora of tasks not directly related to the current one, such as research and staying on top of new releases and software updates.

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carry the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided some great insight into what’s top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were: When to start promoting for holiday: seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their site, and it was attracting a significant portion of their onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions -- carefully balancing when to release pricing and specials, and knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, publishing their “lowest prices of the season” as early as October – assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced to drop prices. Others kept their set pricing with negative customer reaction, causing some to miss their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big guys to release their “lowest prices of the season” guides and will then follow suit. Attribution is tough – and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase, and internal management wants hard data. This has led many retailers to invest in attribution, carefully tracking their online marketing efforts to determine what gets “credit” for the sale, instead of giving credit to the “last click.” Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide – like when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction when it comes to converged online and offline sales is training the in-store staff, and creating a culture where it eventually “doesn’t matter what group gets the credit” if they all add to the sale. No doubt, the area of attribution will be a big area of retail investment in the coming years.
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from, and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both online and offline customer behavior. As we head into the holiday season, the need for accurate enterprise-wide inventory visibility, and providing that information to associates, is even more critical to the brand-wide customer experience. Improving the search / navigation / usability of the site(s): Aside from some of the big ideas and standard holiday pricing pressure, most conversations we had centered around continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously getting shoppers on their path quickly and allowing them to find what they need fast is critical, but it was definitely interesting to hear just how much effort is still going into honing the search and navigation experience. Adding new elements to search and navigation like typeahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and making search-driven product recommendations. (Oracle can help!) Reducing cart abandonment: always a hot topic that is top of mind for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click “buy” and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We’re all online shoppers in our personal lives and we know how frustrating it can be when total prices are not transparent (i.e. shipping, processing, and taxes are not included until the very last possible screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price to be charged to reduce cart abandonment, while not showing the total figure so early in the process that it keeps shoppers from getting to checkout altogether. Recent research shows that providing total pricing prior to the checkout process dramatically reduces cart abandonment – as it serves as a filter to those shopping within a specific price band. Much of the cart abandonment discussion leads us to… The free shipping / free returns question: it’s no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you’re not a mega-retailer, shipping is an expensive part of doing business that doesn’t allow most retailers to keep their prices low and offer free shipping. This has many retailers venturing out on the “free returns” path, especially in apparel. A number of retailers we spoke with are testing a flat rate shipping fee with free returns to see if they can crack the price threshold where shoppers are willing to pay for shipping with an added service. But, free shipping remains king. Social ads and retargeting: they are working, but do they turn off consumers? That’s the big question.
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you’ve been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is if this type of tactic is turning off a large number of shoppers, even if these campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially when it comes to new customer acquisition, serving as a complementary channel to SEO for engaging new customers. While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more and more advanced with each Black Friday, expect some exciting things. This excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don’t doubt that retailers are already thinking about the possibilities of holiday 2015… and beyond. Customers who read this article also found value in the following stories:
Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
What’s new in Oracle Commerce v11.1 for Retail
What the Content+Commerce Equation is Missing

    Read the article

  • Enjoy Playing Dozens of Classic Atari, Adventure, and Other Types of Games Directly in Your Browser

    - by Akemi Iwaya
    Would you love to play classic Atari games, journey once again with Bilbo Baggins in The Hobbit v1.0, or even try out WordStar 2.26? Then we have the perfect way to indulge in hours of browser-based fun to share with you. The Internet Archive has worked hard to put together a JavaScript port of the MESS computer software emulator and create an awesome online Historical Software Collection of classic games and software from yesteryear! When you visit the homepage, you will be able to scroll down through it for a ‘guided tour’ of the games and software currently available in the initial collection. This is what the individual homepages for each game or bit of software looks like. Keep in mind that none of the ‘Download item’ links we checked were working for us even though they are ‘shown’… Browse on over to the Internet Archive’s Historical Software Collection homepage to start having fun with all the classic games and programs. Historical Software Collection Homepage If you would like to visit the homepage for The Hobbit v1.0 directly, then use the link below. Play The Hobbit v1.0 [via The Verge]     

    Read the article

  • Performing a silent install of JDeveloper

    - by draikes
    Installing JDeveloper Now that you have downloaded the latest version of JDeveloper from the Oracle Technology Network, you are almost ready to install it. The problem is that the GUI installer is not as accessible as it could be. However, there is an alternative called a silent install. To perform a silent install, follow these steps: Download the silent.xml file into the same folder as the JDeveloper installer. You can customize the silent.xml file by setting the folder where JDeveloper will be installed, and by setting the location where you have a 1.6 JDK installed with the accessbridge already configured. The defaults are: JDeveloper will be installed at c:\jdev; a JDK is installed at c:\jdk\1.6.0_25 (see instructions at the top of the silent.xml file). Open a command window and navigate to the folder where the JDeveloper installer and silent.xml files are located. Run the following command: jdevstudio11120install.exe -mode=silent -silent_xml=silent.xml -log=install.log Note: this assumes that you are installing JDeveloper 11.1.2.0.0. Change the above command to match the installer package you have. This command will start by extracting the archive, then the Oracle installer will launch, but you just have to wait until the command prompt returns and voila, it will be installed. To run JDeveloper: Now you can use Windows Explorer to navigate to the %JDEV_HOME% as specified in the silent.xml file (c:\jdev unless you changed it) and drill down to jdeveloper\jdev\bin, and now you have a couple of choices. If you have a 32-bit JDK configured with the accessbridge, then run jdevw.exe; however, if you have a 64-bit JDK configured with the accessbridge, you should run jdev64w.exe. For instructions on setting up Accessbridge 2.0.1, see my earlier post. Disclaimer: As always, if something doesn't quite go as planned and you have a problem, please feel free to contact me via email at: don dot raikes at oracle dot com.

    Read the article

  • Need a MP3 ID3 tagger, and cover fetcher

    - by Kaustubh P
    I need to tag my MP3 library, and have tried kid3 (which was manual tagging) when I used Kubuntu 9.10 (I now use Ubuntu Maverick Meerkat). Here are the features I am hoping for: A good and clean UI. Tagging should be automatic, like Winamp's autotag feature, which rocks, btw! It should also embed the cover art in the MP3, not copy a JPEG file into the folder, because nowadays all players support displaying cover art. But acceptable if not possible. Rename the files as per some regular expression like %TrackNo - %Artist - %Title. Should be accurate, and more importantly smart. I want to start tagging at night, and hopefully my collection should be done by the morning, w/o it being stuck at a user prompt at 1%. If one app can't do it all, I am willing to use 3; wouldn't mind exposure to a few more apps ;) I have used Picard or something, and I didn't like it all that much. But I wouldn't mind using it if there is no other alternative. Thanks for your time!

    Read the article

  • Open Multiple Sites Without Reopening the Menus in Firefox

    - by Asian Angel
    Are you frustrated with having to reopen your menus for each website that you need or want to view? Now you can keep those menus open while opening multiple websites with the Stay-Open Menu extension for Firefox. Stay-Open Menu in Action You can start using the extension as soon as you have installed it…simply access your favorite links in the “Bookmarks Menu, Bookmarks Toolbar, Awesome Bar, or History Menu” and middle click on the appropriate entries. Here you can see our browser opening the Productive Geek website and that the “Bookmarks Menu” is still open. As soon as you left click on a link or click outside the menus they will close normally like before. Note: Middle clicked links open in new tabs. The only time during our tests that a newly opened link “remained in the background” was for any links opened from the “Awesome Bar”. But as soon as the “Awesome Bar” was closed the new tabs automatically focused to the front. A link being opened from the “History Menu”…still open while the webpage is loading. Options The options are simple to sort through…enable or disable the additional “stay open” functions and enable automatic menu closing if desired. Conclusion If you get frustrated with having to reopen menus to access multiple webpages at one time then you might want to give this extension a try. Links Download the Stay-Open Menu extension (Mozilla Add-ons)

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each session; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. Final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task: basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that, because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again because there are so many what-ifs when you’ve finished writing something, “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, we were still trying to figure out how do you go about the Game of Life? So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one.
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is: write a test, watch it fail for the right reason; write code to pass the test, watch it pass; refactor, check the test still passes; repeat! It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging, and it made them think in a different way, which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

        public T If(bool condition, Func<T> left, Func<T> right)
        {
            var dict = new Dictionary<bool, Func<T>> {{true, left}, {false, right}};
            return dict[condition].Invoke();
        }

    That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make it easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing; what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things, which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault.
I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: also people said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, they promote functional programming. And TDD people found really hard. “Tell, don’t ask” people found really, really hard and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
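
    For readers who have not met Conway's Game of Life before, here is a plain Python sketch of the cell rules plus a couple of TDD-ping-pong-style tests (the next_state name and the tests are mine, not from the talk, and this version deliberately ignores the session constraints such as "no if-statements"):

        def next_state(alive, live_neighbours):
            """Conway's rules: a live cell survives with 2 or 3 live neighbours;
            a dead cell comes to life with exactly 3."""
            if alive:
                return live_neighbours in (2, 3)
            return live_neighbours == 3

        # TDD ping-pong: one person writes a failing test like these,
        # the other writes just enough code to make it pass.
        def test_underpopulated_cell_dies():
            assert next_state(True, 1) is False

        def test_dead_cell_with_three_neighbours_is_born():
            assert next_state(False, 3) is True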

    Read the article

  • VNC client not working -- Not able to see the changes happening on the other side

    - by javanoob
    Here is the problem: I have two computers connected to the same LAN. I am trying to access one computer from another using Remote Desktop Viewer, and I am able to see the remote desktop. But when I click on anything or perform any action, I don't see the result, even though the action is performed on the remote computer; the change is not refreshed on the remote desktop screen. For example:
    1) Opened Remote Desktop Viewer
    2) Connected to the other computer, which has the Yahoo home page open
    3) Clicked on the close button of the web page
    4) The action is performed on the other computer (the Yahoo page is closed)
    5) On the Remote Desktop screen I still see the Yahoo home page
    6) Whatever action I perform on the Remote Desktop screen, I see the same screen (in this case the Yahoo home page)
    Bottom line: whatever screen I see at startup of Remote Desktop Viewer is not getting refreshed. So the thing is, though I am able to perform actions on the remote desktop, the screen is not refreshing. How do I solve this? I hope I made my point clear. NOTE: I am connecting to an Ubuntu 9.04 machine from Ubuntu 10.04 LTS. I am really not sure if that makes any difference.

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data. Data, Data, Data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on has a predominant role in our life. More and more technologies are used to analyze/track our behavior, try to detect patterns, and propose to us "the best/right user experience", from the Google Ad services to Telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed / aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0, and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple:
    Install on a Linux x64 server (or virtual box appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server.
    Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    Check the Java version of your Linux server with the command: java -version
        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    Modify your .bash_profile:
        export HADOOP_HOME=/u01/hadoop-1.2.1
        export PATH=$PATH:$HADOOP_HOME/bin
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
    (also see my sample .bash_profile)
    Set up ssh trust for the Hadoop process; this is a mandatory step. In our case we have to establish a "local trust" as we are using a single node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode"; all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
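
    As a purely local aside (not part of the original walkthrough, which uses the SQL Connector for HDFS rather than a hand-written job), the MapReduce pattern mentioned above can be made concrete in a few lines of Python; the sample rows and the "count observations per station" job are invented for illustration:

        from collections import defaultdict

        def map_phase(lines):
            """Map: turn each raw CSV row into (key, value) pairs."""
            for line in lines:
                station = line.split(",")[0]
                yield station, 1

        def reduce_phase(pairs):
            """Reduce: aggregate every value that shares a key (the shuffle/sort
            step between the two phases is what the Hadoop framework provides)."""
            totals = defaultdict(int)
            for key, value in pairs:
                totals[key] += value
            return dict(totals)

        rows = ["07005,2013-10-22,12.3", "07015,2013-10-22,11.8", "07005,2013-10-22,13.1"]
        print(reduce_phase(map_phase(rows)))  # {'07005': 2, '07015': 1}

    Hadoop runs these same two phases in parallel across the cluster, with HDFS keeping the input splits close to the compute nodes.
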
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
sqlplus /nolog <<EOF
CONNECT / AS sysdba;
CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
CREATE USER BGUSER IDENTIFIED BY oracle;
GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
GRANT EXECUTE ON sys.utl_file TO BGUSER;
GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
EOF

Put the following in a file named t3.sh and make it executable:

hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
oracle.hadoop.exttab.ExternalTable \
-D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
-D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
-D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
-D oracle.hadoop.exttab.columnCount=7 \
-D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
-D oracle.hadoop.connection.user=BGUSER \
-D oracle.hadoop.exttab.printStackTrace=true \
-createTable --noexecute

Then test the creation of the external table with it:

[oracle@dg1 samples]$ ./t3.sh
./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
Oracle SQL Connector for HDFS Release 2.2.0 - Production
Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
Enter Database Password:
The create table command was not executed.
The following table would be created.
CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
(
 "C1" VARCHAR2(4000),
 "C2" VARCHAR2(4000),
 "C3" VARCHAR2(4000),
 "C4" VARCHAR2(4000),
 "C5" VARCHAR2(4000),
 "C6" VARCHAR2(4000),
 "C7" VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
 TYPE ORACLE_LOADER
 DEFAULT DIRECTORY "BGT_DATA_DIR"
 ACCESS PARAMETERS
 (
  RECORDS DELIMITED BY 0X'0A'
  CHARACTERSET AL32UTF8
  STRING SIZES ARE IN CHARACTERS
  PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
  FIELDS TERMINATED BY 0X'2C'
  MISSING FIELD VALUES ARE NULL
  (
   "C1" CHAR(4000),
   "C2" CHAR(4000),
   "C3" CHAR(4000),
   "C4" CHAR(4000),
   "C5" CHAR(4000),
   "C6" CHAR(4000),
   "C7" CHAR(4000)
  )
 )
 LOCATION
 (
  'osch-20131022081035-74-1'
 )
) PARALLEL REJECT LIMIT UNLIMITED;
The following location files would be created.
osch-20131022081035-74-1 contains 1 URI, 54103 bytes
      54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

The create table command succeeded.
CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
(
 "C1" VARCHAR2(4000),
 "C2" VARCHAR2(4000),
 "C3" VARCHAR2(4000),
 "C4" VARCHAR2(4000),
 "C5" VARCHAR2(4000),
 "C6" VARCHAR2(4000),
 "C7" VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
 TYPE ORACLE_LOADER
 DEFAULT DIRECTORY "BGT_DATA_DIR"
 ACCESS PARAMETERS
 (
  RECORDS DELIMITED BY 0X'0A'
  CHARACTERSET AL32UTF8
  STRING SIZES ARE IN CHARACTERS
  PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
  FIELDS TERMINATED BY 0X'2C'
  MISSING FIELD VALUES ARE NULL
  (
   "C1" CHAR(4000),
   "C2" CHAR(4000),
   "C3" CHAR(4000),
   "C4" CHAR(4000),
   "C5" CHAR(4000),
   "C6" CHAR(4000),
   "C7" CHAR(4000)
  )
 )
 LOCATION
 (
  'osch-20131022081719-3239-1'
 )
) PARALLEL REJECT LIMIT UNLIMITED;
The following location files were created.
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
      54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

This is the view from SQL Developer, and finally here is the number of lines in the Oracle table imported from our Hadoop HDFS cluster:

SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

  COUNT(*)
----------
      1151

In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services.

Oracle University presents a complete curriculum on all the related Oracle technologies:

NoSQL:
Introduction to Oracle NoSQL Database
Using Oracle NoSQL Database

Big Data:
Introduction to Big Data
Oracle Big Data Essentials
Oracle Big Data Overview

Oracle Data Integrator:
Oracle Data Integrator 12c: New Features
Oracle Data Integrator 11g: Integration and Administration
Oracle Data Integrator: Administration and Development
Oracle Data Integrator 11g: Advanced Integration and Development

Oracle Coherence 12c:
Oracle Coherence 12c: New Features
Oracle Coherence 12c: Share and Manage Data in Clusters

Oracle GoldenGate 11g:
Oracle GoldenGate 11g: Fundamentals for Oracle
Oracle GoldenGate 11g: Fundamentals for SQL Server
Oracle GoldenGate 11g Fundamentals for Oracle
Oracle GoldenGate 11g Fundamentals for DB2
Oracle GoldenGate 11g Fundamentals for Teradata
Oracle GoldenGate 11g Fundamentals for HP NonStop
Oracle GoldenGate 11g Management Pack: Overview
Oracle GoldenGate 11g Troubleshooting and Tuning
Oracle GoldenGate 11g: Advanced Configuration for Oracle

Other Resources:
Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies.
"Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic, WebCenter, Content, BPM, SOA, Identity/Security, GoldenGate, Virtualization and Unified Communications Suite throughout the EMEA region.

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criterion for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB.

I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn with these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment, where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o; I have not tried to do this with MongoDB at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used by some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010

In the example which I am building I am using StructureMap as my IoC container. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o because I want to use this in a shared hosting environment where I cannot have such servers installed and run.

public static class Bootstrapper
{
    public static void ConfigureStructureMap()
    {
        ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
    }
}

public class MyApplicationRegistry : Registry
{
    public const string DB4O_FILENAME = "blog123";

    public string DbPath
    {
        get
        {
            return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
        }
    }

    public MyApplicationRegistry()
    {
        For<IObjectContainer>().Singleton().Use(
            () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

        Scan(assemblyScanner =>
        {
            assemblyScanner.TheCallingAssembly();
            assemblyScanner.WithDefaultConventions();
        });
    }
}

So the code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It would be easy to split any segment out, but for the purposes of example I have literally just put everything in the one assembly. You can see an example file structure I have on the right.
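As a side note, a minimal usage sketch of the plumbing above might look like the following. This snippet is my illustration rather than part of the original post; it assumes the IBlogRepository and Post types shown further down, with default-convention scanning mapping BlogRepository to IBlogRepository:

using StructureMap;

// Illustrative only: resolve the repository through StructureMap and store a post.
public static class RepositoryUsageExample
{
    public static void Run()
    {
        // Wire up the container once at application start-up.
        Bootstrapper.ConfigureStructureMap();

        // The repository is backed by the singleton IObjectContainer registered in MyApplicationRegistry.
        var repository = ObjectFactory.GetInstance<IBlogRepository>();

        var post = new Post { Title = "Hello db4o", Content = "Stored in the embedded database file." };
        repository.Put(post);
    }
}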
I am planning on testing out a few implementations of the object databases out there, so I program to an IBlogRepository interface. One of the things I was unsure about was how it would perform in a multi-threaded environment, which is how it will undoubtedly be used nine times out of ten; and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know for sure, as I had not read it anywhere in the documentation. Again, this is probably me not reading things correctly.

In short, though, I threw together a simple test where I iterate to a limit, each time kicking off a common task on a thread from the thread pool. This task simply creates a random Post and adds it to the storage. I put the execution of the threads inside the Setup of the test and then simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs:

[TestInitialize]
public void Setup()
{
    var sw = new System.Diagnostics.Stopwatch();
    sw.Start();

    var resetEvent = new ManualResetEvent(false);
    ThreadPool.SetMaxThreads(20, 20);

    for (var i = 0; i < MAX_ITERATIONS; i++)
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            var eventToReset = (ManualResetEvent)state;
            var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
            Repository.Put(post);

            var counter = Interlocked.Decrement(ref _threadCounter);
            if (counter == 0)
                eventToReset.Set();
        }, resetEvent);
    }

    WaitHandle.WaitAll(new[] { resetEvent });
    sw.Stop();
    Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
}

I was not doing this to test the speed performance of db4o, but while I was at it I could not help putting in a StopWatch and seeing, out of sheer interest, how fast it would insert a number of Posts. I tested it in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows:

With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this may well be there already, and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

public class BlogRepository : IBlogRepository
{
    private readonly IObjectContainer _db;

    public BlogRepository(IObjectContainer db)
    {
        _db = db;
    }

    public void Put(DomainObject obj)
    {
        _db.Store(obj);
    }

    public void Delete(DomainObject obj)
    {
        _db.Delete(obj);
    }

    public Post GetByKey(object key)
    {
        return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
    }
…

Anyway, I hope to get a few more implementations of the object databases going and just get familiar with them and the concept of NoSQL databases.

Cheers for now, Andrew

    Read the article

  • How to install mod_ssl for Apache

    - by Nick Foote
OK, so I installed Apache httpd a while ago and have recently come back to it to try to set up SSL and get it serving several different Tomcat servers. At the moment I have two completely separate Tomcat instances serving up two slightly different versions of my web app (one for dev and one for demo, say) on two different ports: mydomain.com:8081 and mydomain.com:8082.

I successfully (back in Jan) used mod_jk to get httpd to serve those same Tomcat instances at http://www.mydomain.com:8090/dev and http://www.mydomain.com:8090/demo (8090 because I've got another app running on 8080 via Jetty at this stage), using the following code in httpd.conf:

LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug

<VirtualHost *:8090>
  JkMount /devd* tomcatDev
  JkMount /demo* tomcatDemo
</VirtualHost>

What I'm now trying to do is enable SSL. I've added the following to httpd.conf:

Listen 443
<VirtualHost _default_:443>
  JkMount /dev* tomcatDev
  JkMount /demo* tomcatDemo
  SSLEngine on
  SSLCertificateFile "/opt/httpd/conf/localhost.crt"
  SSLCertificateKeyFile "/opt/httpd/conf/keystore.key"
</VirtualHost>

But when I try to restart Apache with "apachectl restart" (yes, after shutting down that other app I mentioned so it doesn't toy with https connections) I continuously get the error:

"Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration. httpd not running, trying to start"

I've looked in the httpd/modules dir and indeed there is no mod_ssl, only mod_jk.so and httpd.exp. I've tried using yum to install mod_ssl; it says it's already installed. Indeed I can locate mod_ssl.so in /usr/lib/httpd/modules, but this is NOT the path where I've installed httpd, which is /opt/httpd, and in fact /usr/lib/httpd contains nothing but the modules dir. Can anyone tell me how to install mod_ssl properly for my installed location of httpd so I can get past this error?
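A hedged note for readers landing here with the same problem (this sketch is my illustration, not part of the original question, and the source path is an assumption): because this httpd was built from source into /opt/httpd, the yum-installed mod_ssl belongs to the distribution's Apache, not to this one. One common approach is to rebuild the source tree with SSL enabled as a shared module so that mod_ssl.so lands under /opt/httpd/modules:

# Rebuild the source-installed httpd with mod_ssl as a shared module
# (the source path is an assumption; adjust to where the httpd sources were unpacked)
cd /usr/local/src/httpd-2.2.x
./configure --prefix=/opt/httpd --enable-so --enable-ssl=shared
make
make install

# Then reference the module in /opt/httpd/conf/httpd.conf:
#   LoadModule ssl_module modules/mod_ssl.so
# and verify the configuration before restarting
/opt/httpd/bin/apachectl configtest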

    Read the article

  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
A guest post from Oracle University....

Whether you're part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing for your customer, you should attend some of the new training courses. Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process.

Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization's financial management processes, including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options so you can make the right implementation decisions. Courses vary based on your role in the implementation and ongoing use of the application, and should be a part of every implementation plan, whether it is for an upgrade or a new rollout.

Here are some of the roles that should consider training:
· Configuration or functional implementers
· Implementation Consultants (Oracle partners)
· Super Users
· Business Analysts
· Financial Reporting Specialists
· Administrators

PeopleSoft Financial Management Courses:

New Features Course:
· PeopleSoft Financial Solutions Rel 9.2 New Features

Functional Training:
· PeopleSoft General Ledger Rel 9.2
· PeopleSoft Payables Rel 9.2
· PeopleSoft Receivables Rel 9.2
· PeopleSoft Asset Management Rel 9.2
· Expenses Rel 9.2
· PeopleSoft Project Costing Rel 9.2
· PeopleSoft Billing Rel 9.2
· PeopleSoft PS / nVision for General Ledger Rel 9.2

Accelerated Courses (include content from two courses for more experienced team members):
· PeopleSoft General Ledger Foundation Accelerated Rel 9.2
· PeopleSoft Billing / Receivables Accelerated Rel 9.2
· PeopleSoft Purchasing / Payable Accelerated Rel 9.2

View PeopleSoft Training Overview Video

    Read the article

  • And We’re Off

    - by Kristin Rose
We are well into Oracle OpenWorld 2012, and what a couple of days it has been! From days one and two of the Oracle PartnerNetwork Exchange Program, to a jammin' AfterDark reception atop the Metreon City View Terrace, and some major keynotes around Cloud to go with it. We think it's safe to say we are off to a running start! With all the excitement buzzing around the floor, we couldn't help but ask YOU, our partners, just what you're looking forward to the most this week. Is it our Test Fest, or possibly our Social Media Rally Station at the OPN Lounge, or our 40+ general sessions? Whatever it is, we can't wait to exceed your expectations! Watch this awesome video below to find out what some other OPN partners like you are talking about this week! See you on the Floor,The OPN Communications Team

    Read the article

  • How to conciliate OOAD and Database Design?

    - by user1620696
Recently I've been studying object-oriented analysis and design, and I liked it a lot. Everywhere I've read, people say the idea is to start with the minimum set of requirements and improve along the way, revisiting the design each iteration and making it better as we continuously develop and stay in contact with the customer interested in the software. In particular, one course from Lynda.com said a lot of that: we don't want to spend a lot of time planning everything upfront, we just want to have the minimum to get started and then improve it each iteration.

Now, I've also seen a course from the same guy about database design, and there he says something different. He says that although he likes the agile, iterative approach when working with object orientation, for database design we should really spend a lot of time planning things upfront instead of just going along the way with the minimum.

But this confuses me a little. Indeed, the database will persist important data from our domain model and perhaps configuration for the software and so on. Now, if I'm going to continuously revisit the analysis and design of the model, it seems the database design should change as well. In the same way, if we plan the whole database upfront, it seems we are also planning the whole model upfront, so the two ideas seem to be incompatible. I really like the agile, iterative approach, but I'm also looking to get a better design for the database, so when working with an agile, iterative approach, how should we deal with database design?

    Read the article

  • Unable to install Win XP nor Win 7 after installing Ubuntu 11.1

    - by Pablo C. Garcia
I'll try to make this scenario as clear as possible.

Laptop specs (HP dv6-2189la): 500 GB HDD, 4 GB RAM, Intel i7

Personal specs: Linux newbie running it for the first time. Quite confused :(

I had Windows 7 x64 and decided to start fresh, so I planned on formatting. Since I use it for work and didn't require it for another week, I didn't rush into installing Win 7 immediately, as I had wanted to try Ubuntu for quite a while.

1) Downloaded Ubuntu 11.1
2) Burned ISO to CD
3) Installed Ubuntu using the full 500 GB HDD, erasing Win7
4) Ubuntu ran awesome (especially for me, being a Linux newbie from scratch)

I used Ubuntu for a while, but now I need to get back to work with Win 7. I tried running the installation CD for Win 7 and it just skips to Ubuntu without loading. Checked the BIOS, tried other discs, even tried the disc on another computer, and there it works. Since that didn't work, I tried running Win XP. This CD does load: it starts loading files, drivers, kernel, blah blah, and before even getting to the installer it blue-screens with error 0x0000007B.

I already used GParted and made up to 250 GB of space available for Windows, formatted to NTFS. I really don't know what to do now. I've tried almost everything within my knowledge. I could say I'm an advanced PC user, but I bumped into the Linux wall starting from scratch. All suggestions will be appreciated. Thanks!

    Read the article

  • How can I create a 4TB partition on my software RAID5 device?

    - by Kris Harper
I have set up a RAID5 device with three 2TB hard drives using mdadm. The device was successfully created, but I cannot seem to create a partition on it. When I try to make an ext3 or ext4 partition via Disk Utility, I get the following error:

Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/md0, start=0, size=4000526106624, type=
Entering MS-DOS parser (offset=0, size=4000526106624)
MSDOS_MAGIC found
found partition type 0xee => protective MBR for GPT
Exiting MS-DOS parser
Entering EFI GPT parser
GPT magic found
partition_entry_lba=2 num_entries=128 size_of_entry=128
Leaving EFI GPT parser
EFI GPT partition table detected
containing partition table scheme = 3
got it
got disk
new partition guid '' is not valid
type '' for GPT appear to be malformed

I have seen this question, but that seems to suggest using GParted to do the partitioning. I'm fine with doing that, but my RAID device doesn't show up in GParted's list of devices; I suspect that's because it is a RAID array and not a regular disk. I have already created a GPT partition table on the device. How can I add a partition to my device?
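One possible command-line approach (my illustration, not from the original question; the device name and sizes are assumptions) is to bypass the GUI tools and create the GPT partition directly on the md device, or simply put the filesystem on the array itself:

# Create a GPT label and a single large partition on the RAID device, then format it
# (assumes the array is assembled as /dev/md0)
sudo parted /dev/md0 mklabel gpt
sudo parted -a optimal /dev/md0 mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/md0p1

# Many setups skip partitioning entirely and format the array directly:
# sudo mkfs.ext4 /dev/md0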

    Read the article

  • Parallel Debugging

With Visual Studio 2010, parallel debugging is easy. Two new debugging windows provide a total view of the internals of your PPL and TPL applications, with hints on where to start investigations. These are not mere extensions to VS, but are tightly integrated with the rest of the debugger experience, so you don't need to learn many new techniques. Use them in your program to eclipse bugs from existence!

One of the most frequent requests I receive is for links to VS2010 parallel debugging content, and rather than keep sending them out individually, I decided to gather them all under one permalink, hence this multi-link blog post.

- MSDN Magazine article on Parallel Debugging.
- Screencast of sample code from the article.
- MSDN Walkthrough: Debugging a Parallel Application (VB, C++, C#).
- Screencast of walkthrough for Parallel Stacks.
- Screencast of walkthrough for Parallel Tasks.
- MSDN "How To" on Parallel Tasks.
- MSDN "How To" on Parallel Stacks.
- Detailed blog post on Parallel Tasks.
- Detailed blog post on Parallel Stacks.
- Detailed blog post on Parallel Stacks - Tasks View.
- Detailed blog post on Parallel Stacks - Method View.
- Download slides on Parallel Tasks and Parallel Stacks (pptx).

If you have questions on these, please post to any of the parallel computing forums or the debugging forum (your question will be routed to me if nobody else can answer it). Comments about this post welcome at the original blog.
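As a quick illustration of the kind of code these windows are aimed at (a minimal sketch of my own, not taken from the post), a couple of nested TPL tasks with a breakpoint in the innermost body will populate both the Parallel Tasks and Parallel Stacks windows when run under the VS2010 debugger:

using System;
using System.Threading.Tasks;

class ParallelDebuggingDemo
{
    static void Main()
    {
        // Set a breakpoint on the Console.WriteLine line, then open
        // Debug > Windows > Parallel Tasks and Debug > Windows > Parallel Stacks.
        var outer = Task.Factory.StartNew(() =>
        {
            var inner = Task.Factory.StartNew(() =>
                Console.WriteLine("Inner task running on a thread-pool thread."));
            inner.Wait();
        });
        outer.Wait();
    }
}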

    Read the article

< Previous Page | 625 626 627 628 629 630 631 632 633 634 635 636  | Next Page >