Search Results

Search found 14824 results on 593 pages for 'replication services'.


  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or arrives too fast for us to process effectively in real time. Collectively, we call these the four big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data better managed by NoSQL systems like Hadoop than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these “NoSQL” tools in combination with traditional business systems such as SQL-based relational databases and warehouses, and other business intelligence tools.

    Drivers: petabyte-scale data collection and storage; business intelligence and insight.

    Solution: The sketch below shows one of many big data solutions, using Hadoop’s uniquely scalable storage and parallel processing capabilities combined with Microsoft Office’s Business Intelligence components to access the data in the cluster.

    Ingredients:
    Hadoop - this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. The key pieces of the environment are:
    Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating those programs.
    Mahout - a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering, implemented on top of Apache Hadoop using the map/reduce paradigm.
    Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. It is directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver (a minimal client sketch follows at the end of this recipe).
    Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as degree, PageRank, Random Walk with Restart (RWR), radius, and connected components.
    Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
    Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    Database - directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop; data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    Reporting - provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    Hadoop Learning Resources (20+ tutorials and labs) - a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    SQL Azure (7 labs) - Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
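
    As a rough illustration of the Hive ODBC route mentioned in the ingredients above, the sketch below queries the cluster from C# through System.Data.Odbc. The DSN name, table, and columns are hypothetical placeholders, and it assumes the Hive ODBC driver is installed and an ODBC data source is configured for the cluster:

        using System;
        using System.Data.Odbc;

        class HiveQuerySketch
        {
            static void Main()
            {
                // "HiveDSN" is a hypothetical ODBC data source pointing at the Hive server.
                using (var conn = new OdbcConnection("DSN=HiveDSN"))
                {
                    conn.Open();

                    // "web_logs" is a placeholder table; the HiveQL is passed through as ODBC command text.
                    var sql = "SELECT status, COUNT(*) FROM web_logs GROUP BY status";
                    using (var cmd = new OdbcCommand(sql, conn))
                    using (OdbcDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                    }
                }
            }
        }

    The same ODBC connection can back an Excel or PowerPivot data source, which is essentially the route the Microsoft Office business intelligence components described above use to reach the cluster.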

    Read the article

  • Improve the relevance of responses to customer requests and reduce average handling times (DMT)

    - by Valérie De Montvallon
    Knowledge Management to improve the relevance of responses to customer requests and reduce average handling times (DMT). With a customer testimonial from SFR. Monday, July 2, from 8:30 to 10:30 a.m. at the Automobile Club de France, Paris. Whether on the web, on a voice call, or in a store, your customers today expect a single, relevant, and fast answer, whatever the contact channel. Your customer advisors and agents need easy access to the necessary information. The volume of data used by customer relations teams (contact centers, in-store sales staff, community managers...) is impressive, often mixing several sources of information and requiring semantic search. Join us on July 2 to discover how a Knowledge Management tool can optimize the relevance of search results. During this morning of discussion, Jocelyn Aubry, DSI Relation Client Grand Public at SFR, will share his experience integrating the Oracle InQuira solution to build a single, cross-channel knowledge base for his customer advisors. Registration: [email protected]

    Read the article

  • Contracting rate for Android development

    - by Robottinosino
    I am a developer; I may have a project to work on, but I'm not sure how much to "charge" for my work... help me find the price? I'm looking for a contracting rate interval, expressed in EUR, to offer for an Android development gig.

    Location: Europe, off-site (telecommuting)
    Environment: Android + HTTP back-end
    Length of project: 1 month
    Rate currency: EUR
    Rate measure: daily or hourly
    Other skills required: UI design, basic J2EE, web services
    Client profile: medium business (100+ employees)
    Previously published apps: 1, on G-Play
    Design work: partially done already; data flow is established, data models still to be built
    Required graphics/multimedia elements: provided by the client
    Source control: remote over SSH
    Team size: one person in development; others (unknown) for marketing, copy, etc.
    Risk factor: medium
    Client reputation: modest but established
    Potential for continued work: absent (for now)
    Internationalisation/localisation: no
    Native code (C/C++, ...): none; only Android SDK, pure Java
    Working culture (based on initial discussion): high pressure
    Other projects in the pipe: zero
    Opportunity to learn from the project or build new skills: 5%
    Negotiating experience: good
    Cost of accessories (licenses, device to debug on): zero

    If I need to be more precise, please just say how and I will be.

    Read the article

  • Heterogeneous data access with OWB: ODI EE Enterprise ETL

    - by Fekete Zoltán
    Following up on the previous two blog posts, the question arises: how can heterogeneous data sources be accessed with Oracle Warehouse Builder? Recommended reading: Oracle Warehouse Builder 11gR2: OWB ETL Using ODI Knowledge Modules. Naturally, OWB has always been able to reach any ODBC-compatible database, as well as mainframe databases, through Oracle Database Heterogeneous Services with ODBC or by using the Oracle Gateways. Oracle Database Gateways: MS SQL Server, Sybase, Teradata, Informix, ODBC, DRDA, APPC, WebSphere MQ, DB2, DB2/400. By purchasing the appropriate Application Adapters, OWB can connect to sources such as SAP, Oracle E-Business Suite, PeopleSoft, Siebel, Oracle Customer Data Hub (CDH), Universal Customer Master (UCM), and Product Information Management (PIM). Starting with OWB 11gR2, OWB can use the Oracle Data Integrator Knowledge Modules for heterogeneous data access, via JDBC and other heterogeneous access methods. Recommended reading: Oracle Warehouse Builder 11gR2: OWB ETL Using ODI Knowledge Modules. Download: Oracle Warehouse Builder. BTW, the Java-based OWB client runs on both Linux and Windows, and the server side of course runs in the Oracle database, on the Solaris, Linux, HP-UX, AIX, and Windows operating systems.

    Read the article

  • 4 Alternatives to Google Drive for Linux

    - by Chris Hoffman
    We’ve covered using Google Drive on Linux with third-party software, but why bother jumping through those hoops? You can use a cloud storage service that officially supports Linux instead – several of Google Drive’s competitors do. Google may be leaving Linux users out, but other services like Dropbox, Ubuntu One, SpiderOak, and Wuala don’t ignore Linux users. They even offer more storage and other useful features, such as local encryption of your files.

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
    Guest Posting From Rebekah Jackson

    This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help.

    It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers' CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy... and the stakes are high.

    Michael Horn, co-author of "Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns", set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on 'Why Successful Organizations Fail'. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computing as an example, we started with the slide rule, then developed the computer, which centralized in the form of mainframes, and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon.

    Applying this pattern to higher education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then by community colleges, and now online education. Michael shared statistics indicating that 50% of students will be taking at least one online course by 2014... and by some measures, that's already the case today. The implication is that technology moves from being the backbone of the campus, the IT department's domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution's strategic plan.

    Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution's financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed. Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first. Technology, then, is the enabler.

    The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and he shared his thoughts candidly about his experiences with cloud deployments, as well as key issues around managing costs and security in cloud deployments.

    Okay, we've covered a lot of ground at this point, from financial planning to business strategy to cloud computing, with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? It seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40s or 50s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US-based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4-year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team

    My wedding photos are stored in several empty shoeboxes. Yes... I got married before digital photography was mainstream... which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector... and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world... with no physical versions in sight (other than a few framed on our walls).

    Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge... especially trying to find archived documents quickly, and many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes.

    Oracle's WebCenter Content Imaging solution is a combination of their WebCenter suite, which offers a robust set of content and document management features, and their Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your 'shoeboxes' of files into digitally managed assets that comply with strict industry regulations.

    The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.

    WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution.
    SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices.
    WebCenter Enterprise Capture (WEC): expedites the capture of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options.
    WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention.
    WebCenter Content: the backend content server that provides versioning, security, and content storage.

    There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:

    1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum of metadata. The WEC activities are defined below:
    The paper document is scanned (or imported from email).
    The scanned image is indexed using a predefined indexing profile.
    The image is committed directly into the process flow.

    2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process for doing so is described below (a simplified sketch of this handoff follows at the end of this post):
    WEC commits the batch using the respective commit profile.
    A TIFF file is created, passing data through the file name by including values separated by "_" (underscores).
    WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created.
    All three files are exported to WCI's Input Agent directory.
    Based on previously defined "input masks", the WCI Input Agent will pick up the seeding file (often the TXT file).
    Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.

    Although these processes may seem complex, each Oracle component works seamlessly together to achieve a high-performing and scalable platform. The solution has been field tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).

    About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].
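
    To make the WFR flow's handoff more concrete, here is a deliberately simplified C# sketch of the three-file drop into the Input Agent directory. The directory path, index values, and file contents are invented placeholders; the real naming and formats are governed by your WFR export and Input Agent input mask configuration:

        using System.IO;

        class InputAgentDropSketch
        {
            static void Main()
            {
                // Hypothetical Input Agent directory and index values.
                string inputAgentDir = @"\\imaging\wci\InputAgent";

                // Values carried through the file name, separated by "_" as described above.
                string baseName = string.Join("_", "INV", "2012", "000123");

                // The TIFF holds the document image (here copied from a placeholder WFR export).
                File.Copy(@"C:\wfr\export\scan.tif", Path.Combine(inputAgentDir, baseName + ".tif"), true);

                // The XML carries the extracted data; the TXT is the seeding/metadata file
                // that the Input Agent picks up based on the configured input mask.
                File.WriteAllText(Path.Combine(inputAgentDir, baseName + ".xml"), "<invoice>...</invoice>");
                File.WriteAllText(Path.Combine(inputAgentDir, baseName + ".txt"), "field1|field2|field3");
            }
        }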

    Read the article

  • Oracle Transportation Management Annual Customer Conference

    - by [email protected]
    The 2010 Oracle Transportation Management (OTM) Conference will be held June 13-16 in Philadelphia, Pennsylvania. The conference brings together all things OTM: users, prospective users, development personnel, product strategy, implementation experts, and software and services partners. With over 200 attendees, this conference is the premier place and time to learn about OTM, build relationships with peers, and get answers to all your OTM questions.

    This year's conference will be held at the Sheraton Society Hill, One Dock St., Philadelphia, PA 19106.

    Companies speaking at this year's event include: AT&T, Land O Lakes, BlueScope Steel, Baillie Lumber, Kraft, Sears, Roseburg Forest Products, Toyota, Beckman Coulter, Levi Strauss, Niagara Bottling, Smurfit Stone, PQ Corporation, and Office Depot.

    To register, click here: http://www.otmconference.com/ConfAgenda.aspx

    Read the article

  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I’d share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components; I just have a separate project file. The high-level steps are listed below, followed by some more details.

    Create a 2012 copy of the project file
    Upgrade the project - just open the new project file in VS2010
    Change the target framework to .NET 4.0
    Set a conditional compilation symbol for DENALI
    Change any conditional code, including the assembly version and UI type name
    Edit the project file to change the referenced assemblies for 2012

    Change target framework to .NET 4.0 - Open the project properties. On the Application page, change the Target framework to .NET Framework 4.

    Set conditional compilation symbol for DENALI - Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI.

    Change any conditional code, including assembly version and UI type name - The value doesn’t have to be DENALI, it can actually be anything you like; that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don’t have the same issues between 2008 and 2012, I still have some sections of code that do change, such as the assembly attributes.

        #if DENALI
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")]
        [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")]
        [assembly: AssemblyVersion("3.0.0.0")]
        #else
        [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")]
        [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")]
        [assembly: AssemblyVersion("2.0.0.0")]
        #endif

    The Visual Studio editor automatically formats the code based on the current compilation symbols; hence in this case the 2008 code is grey to indicate it is disabled. As you can see in the previous example, I have distinct assembly version attributes, ensuring I can run the 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above, I use the conditional compilation symbol to control the code.

        #if DENALI
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com"
        )]
        #else
        [DtsTask
        (
            DisplayName = "File Watcher Task",
            Description = "File Watcher Task",
            IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico",
            UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b",
            TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com"
        )]
        #endif
        public sealed class FileWatcherTask : Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend
        {
            // .. code goes on...
        }

    Shown below is another example I found that needed changing. I borrow one of the MS editors and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component, hence it tests for both the DENALI and KATMAI symbols.

        #if DENALI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #elif KATMAI
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #else
        const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
        #endif

        // Create Match Expression parameter
        IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection;
        IDTSCustomProperty100 property = propertyCollection.New();
        property.Name = MatchParams.Name;
        property.Description = MatchParams.Description;
        property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName;
        property.UITypeEditor = multiLineUI;
        property.Value = MatchParams.DefaultValue;

    Edit project file to change referenced assemblies for 2012 - We now need to edit the project file itself. Open MyComponent2012.csproj in your favourite text editor, and then perform a couple of find-and-replaces as listed below:

    Find: Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Replace: Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Comment: Change the assembly reference versions from SQL Server 2008 to SQL Server 2012.

    Find: Microsoft SQL Server\100\
    Replace: Microsoft SQL Server\110\
    Comment: Change any assembly reference hint path locations from SQL Server 2008 to SQL Server 2012.

    If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location instead of 100 for SQL Server 2008, but that was covered in the previous find and replace.

        "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)"
        copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y

    Read the article

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up - but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer was able to log into the site and see the progress of their projects. I do hope you know what I'm talking about, and that you'll be able to offer some suggestions, either of web-based services that offer something along these lines, or of some open-source solution or something like that? Thank you

    Read the article

  • Oracle announces Brand New Tuxedo 11g Release

    - by ruma.sanyal
    Today Oracle introduced two brand new products within the Tuxedo product line of its application grid portfolio. Oracle Tuxedo Application Runtime for CICS and Batch and Oracle Application Rehosting Workbench provide the ability to automate the rehosting of mainframe online and batch applications to open systems running under Oracle Tuxedo. Oracle Application Rehosting Workbench automates the adaptation of COBOL programs, JCL conversion for batch applications, and the migration of VSAM files and DB2 data schemas. Migration cost, risk, and project length and complexity are dramatically reduced, with over 90% of application assets re-hosted on open systems 'as-is'. Impact on the organization is minimized - users are protected from change by support for 3270 green screens, and developers continue to use familiar CICS APIs, batch functions, and common utilities. Other major features of this release are as follows:
    - Hot pluggability through the introduction of the Oracle Tuxedo JCA Adapter
    - Metadata-driven application development using the SCA programming model
    - Support for the Python and Ruby languages to develop business services
    - Improved scalability and availability, plus TSAM enhancements
    Register for a live webinar with Oracle Fusion Middleware Senior VP Hasan Rizvi. Read the press release. Find more details on these exciting new products.

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs.

    It only makes sense that Twitter should begin versioning their API as it matures. In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1 URL and left the rest of the trends API on the old one. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place, and that’s what factories are for. Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter had more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter, because I pulled all of that together in a consistent way so that you don’t have to.

    Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here’s an example of instantiating a TwitterContext now:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))

    The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next; another constructor call, but with the SearchUrl parameter set to the new URL, as follows:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))

    One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break for Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example:

        twitterCtx.SearchUrl = "https://api.twitter.com/1/";

        var trends =
            (from trend in twitterCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

    Notice how I set the SearchUrl property just in time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left using the SearchUrl will be Search; but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above.

    Those were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services, and it is a testament to the success of the Twitter API and its popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo

    Read the article

  • What reasons are there for not using a third party version control service?

    - by Earlz
    I've recently noticed a bit of a trend in my projects as of late. I used to run my own SVN server on my VPS, but recently the nail went into the coffin for that when I migrated my last project from my server to a Mercurial repo on Bitbucket. What are some of the ramifications of this (disregarding the change in version control systems)? It seems like there has been a huge explosion in version control hosting; companies like Bitbucket even offer private repos for free, and GitHub and other such services are extremely cheap now. Also, by using them you get the benefit of their infrastructure's speed and stability. What reasons are there these days to host your own version control? The only real reason I can think of is if your source code is super top secret.

    Read the article

  • I'm Seeing Red

    - by Grant Fritchey
    Hello World! My move into the world of Red Gate is more and more complete with my shiny, new, red blog. The goal of this blog is not to compete with, or replace, my blog over at ScaryDBA. Instead, this blog is where I can share things I find about Red Gate products and services. I can talk about the things we're doing at Red Gate. I can talk about the things I'm doing at Red Gate. In short, this is my Red Gate blog. I'm still the Scary DBA, but over here, I'm painted bright red (and no, I was promised that no pictures were taken of that process). So look for tips and suggestions about Red Gate products, methods to help you do your job better using one of our tools, and anything else I can think of or comment on that supports you and our excellent software.

    Read the article

  • Identity R2 in London Oct 24th with Amit Jasuja

    - by Naresh Persaud
    Join Amit Jasuja, Senior Vice President, Identity Management and Security, Oracle, and Peter Boyle, Head of Identity Services, BT, in London on 24th October 2012 for the UK launch of Oracle Identity Management 11gR2. You’ll learn more about the evolution of this exceptional business solution and get the unique opportunity to network with existing Oracle customers and speak directly with Oracle product experts. The agenda includes:
    An overview of capabilities
    Product demonstrations
    Customer presentations
    An interactive panel discussion
    Amit Jasuja will also be available for 1:1 meetings. Please email [email protected] to request a meeting with Amit. Click here to Register.

    Read the article

  • Preview - Profit, May 2010

    - by Aaron Lazenby
    Whew! Last Friday, we put the finishing touches on the May 2010 edition of Profit, Oracle's quarterly business and technology journal. The issue will be back from the printer and live on the website in mid-April. Here's a preview:

    Turning Crisis into Opportunity - During the depths of the financial crisis, San Francisco, California-based Wells Fargo & Company launched a bold acquisition of Wachovia Bank--one of the largest financial services mergers in history. Learn how Oracle software helped Wells Fargo CFO Howard Atkins prepare his office for the merger--and assisted with the integration of the companies once the deal was done.
    Building on Success - Global construction firm Hill International takes project management to new heights with Oracle's Primavera solutions.
    Product Management, In Black and White - Catch up with Zebra Technologies to see how Oracle's Agile applications connect with an existing Oracle E-Business Suite system.
    A Perfect Match - Learn how technology makes good medicine in this interview with National Marrow Donor Program CIO Michael Jones.
    The IT Ties That Bind - How information systems are helping manage knowledge workers in a post-9-to-5 work world.

    I'll post a link to the new edition once it's live. Hope you enjoy!

    Read the article

  • Web Developer or Software Engineer?

    - by Deepesh
    A question that I have been asking myself, and I'm really confused about which path to take, so I need your help with the pros and cons of these two professions in today's world. I love web application development, as the web is the best thing to happen in this age and nearly everyone gets by on the World Wide Web; I also tend to keep learning about new technologies and about web services. On the other hand, I like software engineering for desktop applications too, as I have had experience developing small-scale software in VB.NET, Java, C++, etc. Which path has more scope and a better future? What's your view?

    Read the article

  • Render Ruby object to interactive HTML

    - by AvImd
    I am developing a tool that discovers network services enabled on a host and writes a short summary of them, like this:

    init,1
     +-- login,1560
         +-- bash,1629
             +-- nc,12137 -lup 50505

    {
      :net => [
        [0] "*:50505 IPv4 UDP "
      ],
      :fds => [
        [0] "/root (cwd)",
        [1] "/",
        [2] "/bin/nc.traditional",
        [3] "/xochikit/ld_poison.so (stat: No such file or directory)",
        [4] "/dev/tty2",
        [5] "*:50505"
      ]
    }

    It proved to be very nicely formatted and useful for quick discovery, thanks to the colors provided by the awesome_print gem. However, its output is just text. One issue is that if I want to share it, I lose the colors. I'd also like to fold and unfold parts of objects, quickly jump to specific processes, and whatnot - adding comments, for example. Thus I want something web-based. What is the best approach to implement features like these? I haven't worked with web interfaces before and I don't have much experience with Ruby.

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real-life project, a couple of things have dawned on me:

    As soon as your SSIS Catalog gets a significant amount of data in it, the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations on the SQL statements that I can embed within an SSRS report.
    SSIS professionals are data guys at heart, and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway).

    Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz, I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it, but if you would rather just download the bits yourself and start to play, you can download v1.0.0.0 from DB v1.0.0.0.

    Usage Scenarios

    Most Recent Execution
    I find that the most frequent information one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence, if you execute sp_ssiscatalog with no parameters, that is exactly what you will get.

        EXEC [dbo].[sp_ssiscatalog]

    This will return up to 5 resultsets:
    EXECUTION - summary information about the execution, including status, start time, and end time
    EVENTS - all events that occurred during the execution
    OnError,OnTaskFailed - all events where event_name is either OnError or OnTaskFailed
    OnWarning - all events where event_name is OnWarning
    EXECUTABLE_STATS - duration and execution result of every executable in the execution

    All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events, then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT):
    @exec_execution
    @exec_events
    @exec_errors
    @exec_warnings
    @exec_executable_stats

    Any Execution
    As just explained, the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should be returned for, simply supply the execution_id as a parameter:

        EXEC [dbo].[sp_ssiscatalog] 6

    All Executions
    sp_ssiscatalog can also return information about all executions:

        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs'

    The most recent execution will appear at the top. sp_ssiscatalog provides a number of parameters that enable you to filter the resultset:
    @execs_folder_name
    @execs_project_name
    @execs_package_name
    @execs_executed_as_name
    @execs_status_desc

    Some typical usages might be:

        -- Return all failed executions
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed'

        -- Return all executions for a specified folder
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder'

        -- Return all executions of a specified package in a specified project
        EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx'

    Installing sp_ssiscatalog
    Under the covers sp_ssiscatalog actually calls many other stored procedures and functions, hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in a SQL Server Data Tools (SSDT) database project, which means that you have two ways of obtaining it.

    Download the source code
    You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project, which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer.

    Download a dacpac
    Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0. Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard; however, in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do, of course), the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command line sounds a little daunting; I assure you it is not. Simply open a Visual Studio command prompt, cd to the folder containing the downloaded dacpac, and type:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local)

    or the shortened form:

        "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local)

    remembering to set your server name appropriately (here mine is set to “(local)”). If everything works successfully you will see this: And you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog.

    Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest assured, however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • New release of "OLAP PivotTable Extensions"

    - by Luca Zavarella
    For those who are not familiar with this add-in, OLAP PivotTable Extensions adds features of interest to Excel 2007 or 2010 PivotTables pointing to an OLAP cube in Analysis Services. One of the features I like very much is the ability to see the MDX query associated with the current PivotTable in Excel. You can find all the details here: http://olappivottableextend.codeplex.com/ A new version of the add-in (0.7.4) was recently released; it does not introduce any new features, but fixes a significant bug: "Release 0.7.4 now properly handles languages but introduces no new features. International users who run a different Windows language than their Excel UI language may be receiving an error message when they double click a cell and perform drillthrough which reads: 'XML for Analysis parser: The LocaleIdentifier property is not overwritable and cannot be assigned a new value'. This error was caused by OLAP PivotTable Extensions in some situations, but release 0.7.4 fixes this problem." Enjoy!

    Read the article

  • recurring billing / profiles management system

    - by Karl Cassar
    As a company, we have various recurring fees which our clients pay. These can include:
    hosting plans
    maintenance agreements
    SLAs
    ...
    I would like to know if anyone knows of a good, web-based recurring billing / payments management system which we could use to help us get more organised regarding this aspect of our business. Basically, we would need to:
    Create recurring profiles, e.g.: Hosting, emails / domain services @ 200eur / year
    Be able to give free / extended subscription periods, for any reason
    Also, we don't have specific products which we would like to choose and charge for - all these recurring fees are discussed with the clients, and are created on a per-client basis. I'm not sure if this is the best place to ask, however since I think most 'webmasters' require such a system to keep track of payments, I thought this would be the place to go. Thanks in advance!

    Read the article

  • Modular enterprise architecture using MVC and Orchard CMS

    - by MrJD
    I'm making a large-scale MVC application using Orchard, and I'm going to separate my logic into modules. I'm also trying to heavily decouple the application for maximum extensibility and testability. I have a rudimentary understanding of IoC, the Repository pattern, the Unit of Work pattern, and the Service Layer pattern. I've made myself a diagram, and I'm wondering if it is correct and if there is anything I have missed regarding an extensible application. Note that each module is a separate project. Update: So I have many UI modules that use the db module - that's why they've been split up. There are other services the UI modules will use. The UI modules have been split up because they will be made over time, independent of each other.

    Read the article

  • OBIEE 11g 11.1.1.7.1 is Available For BI Enterprise

    - by p.anda
    (in via Ian) The Business Intelligence Enterprise Edition (OBIEE) 11.1.1.7.1 patch set has been released. This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0. It is now available to download from My Oracle Support: Patch 16556157: OBIEE BUNDLE PATCH 11.1.1.7.1. This single OBIEE 11.1.1.7.1 patch set download comprises the following:
    1 of 6 Oracle Business Intelligence Installer (BIINST)
    2 of 6 Oracle Business Intelligence Publisher (BIP)
    3 of 6 EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    4 of 6 Oracle Business Intelligence Server (BIS)
    5 of 6 Oracle Business Intelligence Presentation Services (BIPS)
    6 of 6 Oracle Business Intelligence Platform Client Installers and MapViewer
    Be sure to review the readme file on the Installer download for important installation instructions. The following is also required to be downloaded and applied: Patch 16569379: Dynamic Monitoring Service patch. Additional important notes are available in the following document: Document 1566124.1: OBIEE 11g 11.1.1.7.1 is Available for Oracle Business Intelligence Enterprise Edition.

    Read the article

  • Internet Explorer 9 Support for EPM Products

    - by THE
    (source: Greg) Internet Explorer 9 support for Enterprise Performance Management 11.1.2.1 products: with the release of Patch Set Update 11.1.2.1.600, Internet Explorer 9 is now a supported browser for EPM products. This PSU has been released for:
    Shared Services <Patch 14142678>
    Workspace <Patch 14119724>
    Enterprise Performance Management Architect (EPMA) <Patch 14224664>
    Planning <Patch 14464109>
    Hyperion Financial Management (HFM) <Patch 14354419>
    For full information regarding IE compatibility with EPM products, please review Doc ID 1355681.1, Enterprise Performance Management Products Compatibility with Internet Explorer (IE6, IE7, IE8 and IE9). For the list of latest PSUs for EPM products, please see Doc ID 1400559.1, Available Patch Sets and Patch Set Updates for Oracle Hyperion Enterprise Performance Management Products.

    Read the article

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit like a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a Controller has no state. To do stuff the Controller has some dependencies on web services, repositories, the lot. When you instantiate a Controller you care about supplying those dependencies, nothing else. When you execute an action (a method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say a user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro at the moment). I feel things are less straightforward in this world, though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically); a ViewModel does have state! (To clarify: maybe it delegates all its properties to one or more Models, but that means it must have a reference to the Model one way or another, which is state in itself.) To do stuff the ViewModel has some dependencies on web services, repositories, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService, which in turn returned IEnumerable<ProductDTO> - everybody still with me?), you can do one of these things:

    call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */); - this is bad. Imagine 3 more dependencies; this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful.
    call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO); - the factory is just a Func, and they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO); - so... this is the pattern (abstract factory) that is usually recommended for this type of problem (a sketch follows at the end of this post). I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies some 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].)
    do something with anonymous types or a dictionary - I like my static typing.

    So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC. And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:

    People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:

    Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    Is there something like an in-depth reference implementation available?
    Is having a lot of ViewModels with both state/behavior a design smell?
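
    For reference, here is a minimal C# sketch of the abstract factory option described above, using the question's own ProductDetailsViewModel scenario. The type and member names are hypothetical placeholders, and the IoC container wiring is left out:

        // Hypothetical runtime state and dependency, stubbed so the sketch is self-contained.
        public class ProductDTO { }
        public interface IShoppingCartWebService { }

        // The ViewModel mixes runtime state (the DTO) with injected dependencies.
        public class ProductDetailsViewModel
        {
            private readonly ProductDTO _product;           // runtime state
            private readonly IShoppingCartWebService _cart; // dependency

            public ProductDetailsViewModel(ProductDTO product, IShoppingCartWebService cart)
            {
                _product = product;
                _cart = cart;
            }
        }

        // Consumers (e.g. ProductSearchViewModel) depend only on this interface.
        public interface IProductDetailsViewModelFactory
        {
            ProductDetailsViewModel Create(ProductDTO product);
        }

        // The implementation owns the non-runtime dependencies, so callers supply state only.
        public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
        {
            private readonly IShoppingCartWebService _cart;

            public ProductDetailsViewModelFactory(IShoppingCartWebService cart)
            {
                _cart = cart;
            }

            public ProductDetailsViewModel Create(ProductDTO product)
            {
                return new ProductDetailsViewModel(product, _cart);
            }
        }

    The two extra types per ViewModel shown here are exactly the boilerplate the question complains about: every new dependency must be threaded through both the ViewModel constructor and the factory.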

    Read the article
