Search Results

Search found 3276 results on 132 pages for 'cost'.

Page 51/132

  • Using YouTube as a CDN

    - by Syed
    Why isn't YouTube used as a CDN for video and audio files? Through YouTube's API and developer tools, it would be possible to post all media files to YouTube from a CMS and then call them when needed. This seems to be within YouTube's TOS; it's a cost-effective way to store, retrieve, and distribute media files; and it could also make monetization easy. I ask because I'm working on a new project for a public radio station, and I can't figure out the real downside to this sort of implementation.
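
    A minimal sketch of the upload step the poster describes, assuming the google-api-python-client library, the YouTube Data API v3, and an already-authorized credentials object (the function name and CMS wiring are illustrative, and the file would need to be a video container, since YouTube does not accept bare audio uploads):

```python
# Hypothetical sketch: pushing a media file from a CMS to YouTube with the
# YouTube Data API v3 (google-api-python-client). Credential handling and the
# CMS hook are assumptions.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def publish_episode(credentials, path, title, description=""):
    youtube = build("youtube", "v3", credentials=credentials)
    request = youtube.videos().insert(
        part="snippet,status",
        body={
            "snippet": {"title": title, "description": description},
            "status": {"privacyStatus": "unlisted"},  # keep it off public listings
        },
        media_body=MediaFileUpload(path, chunksize=-1, resumable=True),
    )
    response = request.execute()   # resumable upload; returns the video resource
    return response["id"]          # store this ID in the CMS for later playback
```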

    Read the article

  • Roll Your Own Passive 3D Movie System with Dual Projectors

    - by Jason Fitzpatrick
    If you’d like to enjoy 3D movies with passive polarized glasses for less than $50,000 (the average price of a passive 3D projector), this DIY setup brings the price down to a more accessible level. Courtesy of 3D movie and theater enthusiast Jahun, this guide details how you can achieve passive 3D projection using two radically less expensive projectors, cheap polarized filters, and some software. The project won’t be free-as-in-beer, but with some careful shopping the bill will ring up in the thousands instead of tens of thousands of dollars. Hit up the link below to see how he pulled off mimicking a $50,000 projector for less than a tenth of the cost. Passive Projection [via Hack A Day]

    Read the article

  • Establishing relationships with unsolicited recruiters

    - by Michael
    Several times each year, I receive unsolicited introductions from tech recruiters, usually via LinkedIn and usually from local firms. I am not currently looking for a new job. Is it advisable to establish a relationship with one or more recruiters when I'm not interested in finding new work, so that they have my resume on file? Here's another way I approach the question: my plumber was first hired at the point of need. I had a plumbing problem, looked up a few plumbers, and evaluated and hired one based on demeanor and cost estimates. I established a relationship with a general attorney, on the other hand, well in advance of ever actually needing services, so that if or when services are needed he already knows enough about me to begin work. Should I approach recruiters the way I approached my plumber, or the way I approached my lawyer? A separate discussion, I suppose, is whether the type of recruiters who troll LinkedIn for clients are generally helpful. Edit: I have never worked with a recruiter before, and therefore have little idea what to expect.

    Read the article

  • Why not AJAX'ify entire websites?

    - by Anonymous -
    Is there any solid reasoning as to why sites shouldn't be developed with AJAX functionality that loads major parts of each page (assuming there are elements like the header, navigation, etc. that remain the same)? Surely it would be less resource-intensive, since the server wouldn't have to serve content that appears on every page, benefiting both the host and the end user. Answer the question taking into consideration that the site's JavaScript behaviour degrades gracefully in every instance. For my question I'm talking about new sites where this behaviour could be implemented right from the off, so it doesn't technically cost any money; we're not returning to a finished product to implement it.
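
    For what it's worth, the server-side half of this approach can be very small. A minimal sketch in Flask (the framework and template names are illustrative, not part of the question): the same URL serves either the full page or just the content fragment, which also keeps graceful degradation intact.

```python
# Illustrative sketch: return only the content fragment for AJAX requests and
# the full page, with header and navigation, for normal requests, so the same
# URL still works when JavaScript is unavailable. Template names are made up.
from flask import Flask, render_template, request

app = Flask(__name__)

def wants_fragment():
    # Conventional header set by jQuery and most AJAX helpers.
    return request.headers.get("X-Requested-With") == "XMLHttpRequest"

@app.route("/articles/<int:article_id>")
def article(article_id):
    template = "article_fragment.html" if wants_fragment() else "article_full.html"
    return render_template(template, article_id=article_id)
```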

    Read the article

  • How To Log Into The Desktop, Add a Start Menu, and Disable Hot Corners in Windows 8

    - by Chris Hoffman
    If you don’t have a touchscreen computer and spend all your time on the desktop, Windows 8’s new interface can seem intrusive. Microsoft won’t allow you to disable the new interface, but Classic Shell provides the options Microsoft didn’t. In addition to providing a Start button, Classic Shell can take you straight to the desktop when you log in and disable the hot corners that activate the charms and the Metro app switcher. There are other programs that do this, but Classic Shell is free and open-source; many of the alternatives, such as Start8 and RetroUI, are commercial apps that cost money. We’ve covered Classic Shell in the past, but it’s come a long way since then.

    Read the article

  • DDD and filtering

    - by tikhop
    I am developing an app in a DDD manner, so I have a complex domain model. Suppose I have a Fare object and an Airline, and each Airline can contain several or many Fares. My UI should represent the model (only a small part of the complex model) as a list of Airlines; when the user selects an Airline, I must show the list of its Fares. The user can filter the Fares (by travel time, cost, etc.). What is the appropriate place for filtering Fares and Airlines? I am assuming that I should do it in the ViewModel, like this: my domain model is wrapped with a Service Layer; the UI works with the ViewModel; the ViewModel obtains data from the Service Layer, filters it, and creates DTO objects for the UI. Or am I wrong?
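
    As an illustration of the flow described above (sketched in Python only because the question doesn't name a stack; all names are made up), presentation-only filtering can live in the view model while the service layer hands back DTOs:

```python
# Illustrative-only sketch: the view model pulls Fare DTOs for the selected
# Airline from the service layer, applies the user's filters, and exposes plain
# DTOs to the UI. Domain objects never leak past the service layer.
from dataclasses import dataclass

@dataclass(frozen=True)
class FareDto:
    airline: str
    travel_time_minutes: int
    cost: float

class FareListViewModel:
    def __init__(self, fare_service):
        self._fare_service = fare_service   # wraps the domain model

    def fares_for(self, airline_id, max_cost=None, max_travel_time=None):
        fares = self._fare_service.fares_for_airline(airline_id)  # list of FareDto
        if max_cost is not None:
            fares = [f for f in fares if f.cost <= max_cost]
        if max_travel_time is not None:
            fares = [f for f in fares if f.travel_time_minutes <= max_travel_time]
        return fares
```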

    Read the article

  • Triangle Strips and Tangent Space Normal Mapping

    - by Koarl
    Short: do triangle strips and tangent space normal mapping go together? According to quite a lot of tutorials on bump mapping, it seems common practice to derive tangent space matrices in a vertex program, transform the light direction vector(s) to tangent space, and then pass them on to a fragment program. However, if one is using triangle strips or index buffers, it is a given that the vertex buffer contains vertices that sit at border edges and would thus require more than one normal to derive the tangent space matrices to interpolate between in fragment programs. Is there any reasonable way to avoid duplicate vertices in your buffer and still use tangent space normal mapping? Which do you think is better: encoding normals and tangents in the assets and optimizing the geometry handling to alleviate the cost of duplicate vertices, or using triangle strips and computing normals/tangents completely at run time? Thinking about it, the more reasonable answer seems to be the first one, but why might my professor still be fussing about triangle strips when it seems so obvious?
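
    For reference, the usual offline route (the first option in the question) computes one tangent per triangle from positions and UVs and averages them per vertex before writing the vertex buffer. A minimal numpy sketch of the per-triangle step, for illustration only:

```python
# Sketch of the standard per-triangle tangent computation (Lengyel's method).
# Inputs are numpy arrays: p0..p2 are object-space positions, uv0..uv2 the
# matching texture coordinates.
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Return the (unnormalized) tangent of one triangle in object space."""
    e1, e2 = p1 - p0, p2 - p0          # position edges
    d1, d2 = uv1 - uv0, uv2 - uv0      # texture-coordinate edges
    det = d1[0] * d2[1] - d2[0] * d1[1]
    if abs(det) < 1e-8:                # degenerate UV mapping
        return np.zeros(3)
    r = 1.0 / det
    return (e1 * d2[1] - e2 * d1[1]) * r

# Per-vertex tangents are then the normalized sum of the tangents of all
# triangles sharing that vertex, re-orthogonalized against the vertex normal
# (Gram-Schmidt) before being written to the vertex buffer.
```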

    Read the article

  • Computer Networks UNISA - Chap 14 – Ensuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    - Identify the characteristics of a network that keep data safe from loss or damage
    - Protect an enterprise-wide network from viruses
    - Explain network- and system-level fault tolerance techniques
    - Discuss issues related to network backup and recovery strategies
    - Describe the components of a useful disaster recovery plan and the options for disaster contingencies

    What are integrity and availability? Integrity is the soundness of a network's programs, data, services, devices, and connections. Availability is how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, human error, users, etc. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines:
    - Allow only network administrators to create or modify NOS and application system users
    - Monitor the network for unauthorized access or changes
    - Record authorized system changes in a change management system
    - Install redundant components
    - Perform regular health checks on the network
    - Check system performance, error logs, and the system log book regularly
    - Keep backups
    - Implement and enforce security and disaster recovery policies
    These are just some of the basics.

    Malware. Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses, and bots. Some common characteristics of malware include encryption, stealth, polymorphism, and time dependence.

    Malware protection. There are various tools available to protect you from malware, called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware, including signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and to find a balance between performance and protection. There are several general-purpose malware policies that can be implemented to protect your network, including:
    - Every computer in an organization should be equipped with malware detection and cleaning software that runs regularly
    - Users should not be allowed to alter or disable the anti-malware software
    - Users should know what to do in case the anti-malware program detects a malware virus
    - Users should be prohibited from installing any unauthorized software on their systems
    - System-wide alerts should be issued to network users notifying them if a serious malware virus has been detected

    Fault tolerance. Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance: the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. The areas that need to be considered for fault tolerance include the environment (temperature and humidity), power, topology and connectivity, servers, and storage.

    Power. Typical power flaws include:
    - Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power
    - Noise – fluctuation in voltage levels caused by other devices on the network or electromagnetic interference
    - Brownout – a sag in voltage for just a moment
    - Blackout – a complete power loss
    There are various alternate power sources to consider, including UPSs and generators. UPSs fall into two categories:
    - Standby UPS – provides continuous power when the mains goes down (with a brief switch-over period)
    - Online UPS – is online all the time; the device receives power from the UPS continuously, and the UPS is charged continuously

    Servers. There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another; it is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.

    Storage. There are various techniques available, including RAID arrays, NAS (Network Attached Storage), and SANs (Storage Area Networks).

    Data backup. A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, with various media including optical media, tape backup, external disk drives, and network backups. These vary in cost and speed.

    Backup strategy. After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include:
    - What data must be backed up?
    - At what time of day or night will the backups occur?
    - How will you verify the accuracy of the backups?
    - Where and for how long will backup media be stored?
    - Who will take responsibility for ensuring that backups occurred?
    - How long will you save backups?
    - Where will backup and recovery documentation be stored?
    Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up (a small code sketch follows at the end of this summary), including:
    - Full backup – all data on all servers is copied to storage media
    - Incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium
    - Differential backup – only data that has changed since the last full backup is copied to a storage medium

    Disaster recovery. Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some devices appropriately configured. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
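
    Returning to the backup methods above for a moment, a tiny sketch may make the full/incremental/differential distinction concrete (the paths and reference times below are placeholders, not anything from the chapter):

```python
# Illustrative sketch: an incremental backup copies files changed since the
# last backup of any kind; a differential copies files changed since the last
# full backup. Paths and timestamps are placeholders.
import os, time

def files_changed_since(root, since_epoch):
    """Yield files under `root` modified after the given reference time."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since_epoch:
                yield path

last_full_backup = time.time() - 7 * 24 * 3600   # e.g. a week ago
last_any_backup = time.time() - 1 * 24 * 3600    # e.g. last night

differential_set = list(files_changed_since("/srv/data", last_full_backup))
incremental_set = list(files_changed_since("/srv/data", last_any_backup))
```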

    Read the article

  • More Collateral to drive Remarketer Activity

    - by martin.morganti(at)oracle.com
    Over the last few months we have added a range of marketing materials to the Remarketer Level pages, including an extensive set of Marketing Kits to support the Solution Kits previously available. As part of the continuing program, we have just added some additional training for Remarketers at no cost. In addition to the Oracle 1-Click Technology Products Guided Learning Path, which explains the program and how to position the products, we have added the Oracle Database 11g 1-Click Technology Sales Guided Learning Path. This learning path gives Remarketers access to more detailed training on the Oracle 11g database, so they can develop their understanding of the opportunities they could be benefiting from today by reselling the 1-Click products. This is in direct response to requests to make more training available to Remarketers, so take the opportunity to let your Remarketer customers and prospects know about this additional training and how they can jump-start a resell business with Oracle today with No Fees, No Barriers and No Excuses.

    Read the article

  • Webinar: Integrated Sales & Marketing - An Impossible Dream?

    - by charles.knapp
    Are you making the most of the latest B2B marketing thinking? Are your marketing tactics, your outbound email campaigns and your SEO generating enough of the prospects and leads that your sales teams need? Are your sales and marketing functions aligned and working together with optimised results? In this webinar with MarketingWeek magazine, find out how:
    - To ensure your marketers create and deliver consistently effective, targeted campaigns
    - You can triple the customer intelligence your marketers gather, ensuring your sales teams are better informed and qualified than ever before
    - To generate up to 200% growth in lead volume and start measuring marketing effectiveness against the increase in sales and the size of an average deal
    And hear how BPI OnDemand has delivered integrated sales & marketing across industries, with results such as 100% ROI on system cost for Heal's after just one campaign.

    Read the article

  • DCOGS Balance Breakup Diagnostic in OPM Financials

    - by ChristineS-Oracle
    The purpose of this diagnostic (OPMDCOGSDiag.sql) is to identify the sales orders which constitute the Deferred COGS account balance. This will help to get the detailed transaction information for sales order(s) across Order Management, Accounts Receivables, Inventory, and the OPM Financials subledger at the organization level. The script is applicable to various scenarios of standard sales orders and return orders (RMA), coupled with all the applicable OPM costing methods, such as Standard, Actual, and Lot costing. OBJECTIVE: to collect the information of the sales order(s) which are at different stages of their life cycle in one spreadsheet in one go. This will help in:
    - Less time for data collection
    - Faster diagnosis of the issue
    - Easy collaboration across different modules like Order Management, Accounts Receivables, Inventory and Cost Management
    You can download the script from Doc ID 1617599.1 DCOGS Balance Breakup (SO/RMA) and Diagnostic Analyzer in OPM Financials.

    Read the article

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file and they admit overbilling me, I am going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs

    - by Steven A. Lowe
    Version control for ladder logic (LL) programs for programmable logic controllers (PLCs) seems to be virtually non-existent. It may be because LL is a visual language and tends to be stored in binary files, or it may be because source code control hasn't "caught on" in process control engineering circles - or perhaps my Google-Fu is weak tonight. Do you know of any realistic and useful solutions for version control for such systems? Definitions: realistic = changes to the programs are tracked by user and subject to reversion and merges; useful = the system integrates with visual LL designers, is not limited to LL from a single PLC manufacturer, and does not cost a ridiculous amount of money. Note: I have heard of people using SVN or Mercurial et al. to track the binary files, but I don't think the diff/merge capabilities would display readable differences.

    Read the article

  • Can the overuse of custom taglibs disrupt the outsourcing of html designers?

    - by Renato Gama
    Yesterday a friend and I were talking about the overuse of custom taglibs. We create taglibs for everything! We create taglibs to wrap jQuery UI elements (tabs, buttons, etc.) and elements of other plugins as well, and we often wrap them together in a single component. We use taglibs to the point that we have almost no pure HTML within the body tag. Our question is: is this a healthy habit? Imagine two situations: 1) We hire an HTML designer and bear the cost of a month for him to learn all this stuff. 2) We want to outsource the HTML development, but no company would take on learning our taglib library, or it would become more expensive. We love taglibs, as they have been a lovely shortcut for JavaScript development since we write it only once. What would be the best practices in this sense, and what would you suggest? We are looking for a future-proof solution (or an argument that agrees with ours).

    Read the article

  • How to Seamlessly Extend the Windows Server Trial to 240 Days

    - by Jason Faulkner
    Microsoft's evaluation releases of their products are incredibly valuable and useful tools, as they allow you to have an unlimited number of test, demo and development environments to work with at no cost. The only catch is that evaluation releases are time-limited, so the more time you can squeeze out of them, the more useful they are. Here we are going to show you how to extend the usage time of the Windows Server 2008 R2 evaluation release to its maximum.

    Read the article

  • Dynamic Dijkstra

    - by Dani
    I need a dynamic Dijkstra algorithm that can update itself when an edge's cost changes, without a full recalculation. Full recalculation is not an option. I've tried to "brew" my own implementation with no success. I've also tried to find one on the Internet, but found nothing. A link to an article explaining the algorithm, or even its name, would be good. Edit: Thanks everyone for answering. I managed to make an algorithm of my own that runs in O(V+E) time; if anyone wishes to know the algorithm, just say so and I will post it.
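
    For anyone else looking for a starting point, here is a minimal sketch of the decrease-only case, which is the easy half of the problem (a generic technique, not the O(V+E) algorithm the poster mentions): when an edge gets cheaper, distances can only improve, so it is enough to re-relax outward from the affected endpoint with a priority queue instead of rerunning Dijkstra from the source. Cost increases are harder and typically require recomputing the affected subtree.

```python
# Sketch: partial shortest-path update after an edge-weight *decrease*.
# graph: {node: [(neighbor, cost), ...]}, dist: current distances from source.
import heapq

def on_edge_decrease(graph, dist, u, v, new_cost):
    # Apply the cheaper weight to every u -> v edge.
    graph[u] = [(n, new_cost if n == v else c) for n, c in graph[u]]
    if dist.get(u, float("inf")) + new_cost >= dist.get(v, float("inf")):
        return                                  # nothing downstream can improve
    dist[v] = dist[u] + new_cost
    heap = [(dist[v], v)]
    while heap:                                 # Dijkstra restarted from v only
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue                            # stale heap entry
        for y, c in graph[x]:
            if d + c < dist.get(y, float("inf")):
                dist[y] = d + c
                heapq.heappush(heap, (dist[y], y))
```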

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction. Recently a client I worked for had two major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at such short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn't seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it had been hosted on premise by the infrastructure team. Thank you for taking the time to read this blog post; in this series of posts, I'll try to cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?
    Administration, cost, flexibility and scalability are the areas you would want to think about when deciding between your own data centre and Azure.

    How is Microsoft's public cloud offering different from Amazon's? Microsoft's cloud offering is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes it from other providers such as Amazon (Amazon only offers IaaS).
    - PaaS (Platform as a Service) fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.
    - IaaS (Infrastructure as a Service) is similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
    The biggest advantage of PaaS is that you do not have to worry about maintaining the environment; you can focus all your time on solving business problems with your solution. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud, and to Azure specifically, let's discuss the test rig.

    The Load Test Rig – Topology. Now the moment of truth. Of course, a big part of getting value from cloud computing is identifying the most suitable workloads to take to the cloud, so I've decided to build a load testing rig where the agents run on Windows Azure. I'll talk you through the topology:
    - User: The user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it uses the Windows Azure Connect service to orchestrate the test responsibilities across all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller collects the performance data from the agents using the traditional WMI mechanisms.
    - Test Agents: The Test Agents are in the Windows Azure public cloud. As soon as the Test Controller issues instructions, the Test Agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the Test Agents, and finally the results are passed back to the Controller.
    - Servers: The web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications; you probably want to manage them yourself.

    Recap and what's next? In this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I would also like to mention that I stumbled upon this [Video] on Azure in a nutshell, a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the test rig. I hope you enjoyed this post; if you have any recommendations on things that I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • Payback Is The Coupon King

    - by Troy Kitch
    PAYBACK GmbH operates the largest marketing and couponing platforms in the world—with more than 50 million subscribers in Germany, Poland, India, Italy, and Mexico.

    The Security Challenge. PAYBACK handles millions of requests for customer loyalty coupons and card-related transactions per day under tight latency constraints—with up to 1,000 attributes or more for each PAYBACK subscriber. Among the many challenges they solved using Oracle, they had to ensure that storage of sensitive data complied with the company’s stringent privacy standards aimed at protecting customer and purchase information from unintended disclosure.

    Oracle Advanced Security. The company deployed Oracle Advanced Security to achieve reliable, cost-effective data protection for back-up files and gain the ability to transparently encrypt data transfers. By using Oracle Advanced Security, organizations can comply with privacy and regulatory mandates that require encrypting and redacting (display masking) application data, such as credit cards, social security numbers, or personally identifiable information (PII). Learn more about how PAYBACK uses Oracle.

    Read the article

  • Partitioning tutorial - new features in Oracle Database 12c

    - by KLaker
    For data warehousing projects, Oracle Partitioning really is a must-have feature because it delivers so many important benefits, such as:
    - Dramatically improves query performance and speeds up database maintenance operations
    - Lowers costs by enabling a tiered storage approach that allows data to be stored on the most cost-effective storage for better resource utilisation
    - Combined with Oracle Advanced Compression, it provides an automated approach to information lifecycle management using a simple, efficient, yet powerful way to manage data growth and reduce complexity and costs
    To help you get the most from partitioning, we have released a new tutorial that covers the 12c new features. Topics include how to:
    - Use Interval Reference Partitioning
    - Perform Cascading TRUNCATE and EXCHANGE Operations
    - Move Partitions Online
    - Maintain Multiple Partitions
    - Maintain Global Indexes Asynchronously
    - Use Partial Indexes
    For more information about this tutorial, follow this link to the Oracle Learning Library: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:8408,2 where you can begin your tutorial right now! For more information about Oracle Partitioning, visit our home page on OTN: http://www.oracle.com/technetwork/database/bi-datawarehousing/dbbi-tech-info-part-100980.html

    Read the article

  • Live Webcast, Dec. 6: Enterprise Clouds with Oracle VM

    - by Monica Kumar
    Mark your calendar! On Tuesday, Dec. 6th at 9am PT, we are hosting a live webcast with Oracle VM experts.

    Enterprise Clouds with Oracle VM – Tuesday, Dec. 6 at 9 AM US PT

    The ability to create a cloud leveraging public or private infrastructure has been hampered by the lack of availability of practical, cost-effective choices for server virtualization. In this session, you will learn how Oracle provides a single virtualization solution for your entire infrastructure, and how Oracle Enterprise Manager and Oracle Virtual Assembly Builder help you manage Oracle Applications across the cloud. Also find out how virtualization was leveraged to transform IT for Oracle University and support more than 350,000 students in more than 40,000 classes each year. Those lessons have paved the path to private cloud computing inside Oracle.

    Speakers: Adam Hawley, Senior Director of Product Management, Oracle; Dan Herrup, Principal Systems Engineer, Oracle Corporate Citizenship

    Register Now.

    Read the article

  • Brand New Oracle WebLogic 12c Online Launch Event, December 1, 10am PT

    - by B Shashikumar
    The brand new WebLogic 12c will be launched on December 1st with a 2-hour global webcast highlighting salient capabilities and benefits and featuring Hasan Rizvi, SVP, Oracle Fusion Middleware and Java. For the more techie types, the 2nd hour will be a developer focused discussion including multiple demos and live Q&A. Please join us, with your fellow IT managers, architects, and developers, to hear how the new release of Oracle WebLogic Server is:
    - Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    - Built to drive higher value for your current infrastructure and significantly reduce development time and cost
    - Enhanced with transformational platforms and technologies such as Java EE 6, Oracle’s Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder

    Read the article

  • Don’t Miss “Transform Field Service Delivery with Oracle Real-Time Scheduler”

    - by ruth.donohue
    Field resources are an expensive element in the service equation. Maximizing the scheduling and routing of these resources is critical in reducing costs, increasing profitability, and improving the customer experience. Oracle Real-Time Scheduler creates cost-optimized plans and schedules for service technicians that increase operational efficiencies and improve margins. It enhances Oracle’s Siebel Field Service with real-time scheduling and dispatch capabilities that ensure service requests are allocated efficiently and service levels are honored. Join our live Webcast to learn how your organization can leverage Oracle Real-Time Scheduler to:
    - Increase operational efficiency with real-time scheduling that enables field service technicians to handle more calls per day and reduce travel mileage
    - Resolve issues faster with dynamic work flows that ensure you have the right technician with the right skill set for the right job
    - Improve the customer experience with real-time planning that optimizes field technician routing, reduces customer wait times, and minimizes missed SLAs

    Date: Thursday, March 10, 2011
    Time: 8:30 am PT / 11:30 am ET / 4:30 pm UK / 5:30 pm CET

    Click here to register now.

    Read the article

  • Using automated BDD GUI tests to keep user documentation screenshots up to date?

    - by k3b
    Are there developers out there who (ab)use the CaptureScreenshot() function of their automated GUI tests to also create up-to-date screenshots for the user documentation? Background: within the lifetime of an application, its GUI elements are constantly changing. It takes a lot of work to keep the user documentation up to date, especially if the example data in the pictures should match the textual description. If you already have automated BDD GUI tests, why not let them take screenshots at certain points? I am currently playing with web apps in dotnet+specflow+selenium, but this topic also applies to other BDD engines (JRuby-Cucumber, mspec, rspec, ...) and GUI test frameworks (WatiN, Watir, White, ...). Any experience, thoughts or URL links on this topic would be helpful. What is the cost/benefit ratio? Is it worth the effort? What are the drawbacks? See also: Is it practical to retroactively write specifications documenting a system via automated acceptance tests?
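
    As a sketch of the idea, using Selenium's Python binding for illustration (the poster's stack is SpecFlow/.NET, where the WebDriver call is equivalent): after a step that leaves the UI in a documented state, save a named screenshot straight into the documentation's image folder. The paths, URL and step wiring below are assumptions.

```python
# Illustrative sketch: capture documentation screenshots from a GUI test run.
import os
from selenium import webdriver

DOC_IMAGE_DIR = "docs/images"   # placeholder location for the manual's images

def capture_for_docs(driver, name):
    os.makedirs(DOC_IMAGE_DIR, exist_ok=True)
    driver.save_screenshot(os.path.join(DOC_IMAGE_DIR, f"{name}.png"))

driver = webdriver.Firefox()
driver.get("https://example.test/orders/new")   # placeholder URL
# ... drive the scenario to the state shown in the manual ...
capture_for_docs(driver, "new-order-form")
driver.quit()
```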

    Read the article

  • The Business Case for a Platform Approach

    - by Naresh Persaud
    Most customers have assembled a collection of Identity Management products over time, as they have reacted to industry regulations, compliance mandates and security threats, typically selecting best-of-breed products. The resulting infrastructure is a patchwork of systems that has served short-term IDM goals, but is overly complex, hard to manage, and cannot scale to meet the needs of the future social/mobile enterprise. The solution is to rethink Identity Management as a platform, rather than individual products. Aberdeen Research has shown that taking a vendor-integrated platform approach to Identity Management can reduce cost, make your IT organization more responsive to the needs of a changing business environment, and reduce audit deficiencies. View the slide show below to see how companies like Agilent, Cisco, ING Bank and Toyota have all built the business case and embraced the Oracle Identity Management Platform approach.

    Read the article

  • Google I/O 2012 - What's Possible with the Google Drive SDK

    Presented by Nicolas Garnier. Partners of Google Drive have already implemented a number of extremely compelling applications that use Google Drive for file storage. Implementing on the Google Drive SDK enables developers to distribute the cost of storage, while also removing the pain of reimplementing file management. In this session, we'll take a look at a number of existing Google Drive SDK implementations with popular apps. For all I/O 2012 sessions, go to developers.google.com. (Video from GoogleDevelopers, 56:25.)
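
    For a flavour of what "distributing the cost of storage" looks like from code, here is a minimal sketch against the Drive v3 REST API via google-api-python-client (the session itself predates v3, so treat this as an illustration rather than the talk's code; credential handling is assumed and names are made up):

```python
# Illustrative sketch: store a file in the user's Drive and keep only its ID.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def save_to_drive(credentials, local_path, display_name):
    drive = build("drive", "v3", credentials=credentials)
    created = drive.files().create(
        body={"name": display_name},
        media_body=MediaFileUpload(local_path, resumable=True),
        fields="id, webViewLink",
    ).execute()
    return created  # {'id': ..., 'webViewLink': ...}; persist the id, not the bytes
```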

    Read the article
