Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.


  • Avoiding Mixup of Language Details

    - by DarenW
    Today someone asked me what was wrong with their source code. It was obvious. "Use double equals in place of that single equal in that if statement. Um, I think..." As I remember, some languages actually accept a single equals for comparison. Since I sometimes forget or mix up the syntax details among the several languages I use, I stepped over to my laptop to try a quickie experiment... It costs a bit of time and breaks the flow to try "quick" experiments (though maybe the practice is good for memory). What tips do you have for keeping the syntax (and other) details of multiple languages straight in your mind? (And nowadays, this applies just as well to the many wiki-like markups!)

    Read the article

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into the popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

        t = (time of entry post) - (Dec 8, 2005)
        x = upvotes - downvotes
        y = {1 if x > 0, 0 if x = 0, -1 if x < 0}
        z = {1 if x < 0, otherwise x}
        score = log(z) + (y * t) / 45000

    I have always performed simple ordering within SQL, and I'm wondering how I should deal with this kind of ordering. Should it be used to define a table, or could I build the ordering formula into the SQL itself (without hurting performance)? I am also wondering whether it is possible to use multiple ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^
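    A minimal sketch of that scoring function in Python, assuming the log is base 10 and that z is clamped to at least 1 so log(0) can never occur (the post's definition of z leaves that case open); the function name and epoch handling are illustrative only:

        import math
        from datetime import datetime

        EPOCH = datetime(2005, 12, 8)

        def hot_score(upvotes, downvotes, posted_at):
            t = (posted_at - EPOCH).total_seconds()   # seconds since the site epoch
            x = upvotes - downvotes
            y = 1 if x > 0 else (-1 if x < 0 else 0)  # sign of the vote balance
            z = max(abs(x), 1)                        # magnitude fed into the log, never below 1
            return math.log10(z) + (y * t) / 45000

    In Django this score could be stored as a denormalized column and used in an ORDER BY, which sidesteps recomputing the formula on every query; that is one common approach, not necessarily the best one here.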

    Read the article

  • "Nearly divisible"

    - by bobobobo
    I want to check if a floating point value is "nearly" a multiple of 32. E.g. 64.1 is "nearly" divisible by 32, and so is 63.9. Right now I'm doing this:

        #define NEARLY_DIVISIBLE 0.1f

        float offset = fmodf( val, 32.0f );
        if( offset < NEARLY_DIVISIBLE )
        {
            // it's near from above
        }
        // if it was 63.9, then the remainder would be large, so add some and check again
        else if( fmodf( val + 2*NEARLY_DIVISIBLE, 32.0f ) < NEARLY_DIVISIBLE )
        {
            // it's near from below
        }

    Got a better way to do this?
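    One common alternative is to measure the distance to the nearest multiple directly. A sketch in Python purely for brevity (the corresponding C calls would be roundf and fabsf from math.h); the function name and tolerance are illustrative and this is not necessarily the approach the poster settled on:

        def nearly_divisible(val, step=32.0, tol=0.1):
            # distance from val to the nearest multiple of step
            nearest = round(val / step) * step
            return abs(val - nearest) < tol

        print(nearly_divisible(95.95))  # True  (0.05 away from 96)
        print(nearly_divisible(50.0))   # False (14 away from 64, 18 away from 32)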

    Read the article

  • Qt: Writing plugins for other applications

    - by Rasmus Faber
    I am writing a plugin for another application. I want to support the plugin on multiple platforms, so I am strongly considering using Qt. The plugin needs to be able to show some basic GUI. The plugin interface does not in any way handle GUI - it is just a simple DLL/shared library specified with a C-header file. Can I use Qt inside such a shared library? The calling application might or might not be using Qt itself. Any hints on what to do? Do I need to run a QApplication event-loop in a separate thread? Or can I just call the event-loop myself while waiting for input? (I only need modal dialogs).

    Read the article

  • How to structure an enterprise MVC app, and where does Business Logic go?

    - by James
    I am an MVC newbie. As far as I can tell:

        Controller: deals with routing requests
        View: deals with presentation of data
        Model: looks a whole lot like a data access layer

    Where does the business logic go? Take a large enterprise application with:

        Several different sources of data (WCF, web services and ADO) tied together in a data access layer (using multiple different DTOs).
        A lot of business logic segmented across several DLLs.

    What is an appropriate way for an MVC web application to sit on top of this (in terms of code and project structure)? The examples I have seen, where everything just goes in the Model folder, don't seem appropriate for very large applications. Thanks for any advice!

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with a website and/or web service that queries a SQL Server database to get data and display it to the user.

        * The solution must be able to handle an estimated 2000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24 hour period; however, networking and bandwidth traffic decreases after 5 pm.
        * Two Windows 2003 servers are used, one for web, another for SQL. Both are located in the same room. User access is varied and users can be far or near (it's a centralized system); users access it via the web.
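    One rough way to turn the monthly figures above into load-test targets, sketched in Python; the 10x peak multiplier is an assumption for illustration, not a figure from the post:

        SECONDS_PER_MONTH = 30 * 24 * 3600

        def load_targets(monthly_calls, peak_factor=10):
            avg_rps = monthly_calls / SECONDS_PER_MONTH   # average requests per second
            return avg_rps, avg_rps * peak_factor         # assumed peak for sizing the test

        print(load_targets(100000))   # roughly (0.04, 0.4) requests/second for the database calls

    Note that the 700-concurrent-users figure, rather than the average rate, is usually what drives the virtual-user count in the load-test tool.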

    Read the article

  • OpenSSL "Seal" in C (or via shell)

    - by chpwn
    I'm working on porting some PHP code that contacts a web API to C. The issue I've come across is that the PHP code uses the function openssl_seal(), but I can't seem to find any way to do the same thing in C, or even via openssl in a call to system(). From the PHP manual on openssl_seal():

        int openssl_seal ( string $data , string &$sealed_data , array &$env_keys , array $pub_key_ids )

        openssl_seal() seals (encrypts) data by using RC4 with a randomly generated secret key. The key is encrypted with each of the public keys associated with the identifiers in pub_key_ids and each encrypted key is returned in env_keys. This means that one can send sealed data to multiple recipients (provided one has obtained their public keys). Each recipient must receive both the sealed data and the envelope key that was encrypted with the recipient's public key.

    What would be the best way to implement this? I'd really prefer not to call out to a PHP script every time, for obvious reasons.

    Read the article

  • How/is data shared between fastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold duplicate copies of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?

    Read the article

  • Do you know any code sharing sites?

    - by jasondavis
    I am always looking to organize my resource bookmarks and make them easier to access when I need them. One thing I really like is code-sharing sites: they let you enter code and then give you a special link you can give to a friend, or even to a user on this site, to show them the code. I believe this is a very useful tool. Below is my list of code-sharing sites; there are four on the list and they all have unique features. Some have syntax highlighting for multiple languages, some allow you to save your code as private and only share it with the people you give the link to, and some even run the code and output any errors. Do you know of any other sites like this for programming code? If so, please post them here.

        http://pastie.org/
        http://codepad.org/
        http://pastebin.me/
        http://jsbin.com/ (allows you to auto-insert a JavaScript library like jQuery and test live JS code)

    Read the article

  • SetSel on EN_SETFOCUS or WM_SETFOCUS doesn't work

    - by Coder
    I've run into the next mystic thing in WinAPI/MFC. I have an edit box whose contents I have to select on Tab, Lclick, Rclick, Mclick and so on. The sort of obvious path is to handle the SETFOCUS message and call SetSel(0, -1), which should select all text. But it doesn't work! What's wrong? I tried googling; everyone seems to override the click handlers or call SetSel in parent windows, but this is wrong from an encapsulation point of view, and multiple clicks (the user wants to insert something in the middle of the text) will break, and so on. Why isn't my approach working? I tried like 10 different ways, tried to trap all possible focus messages, and looked up info on MSDN, but nothing works as expected. Also, I need to recreate the caret on focus, which also doesn't seem to work. The SETFOCUS message gets trapped alright; if I add __asm int 3, it breaks every time. It's the caret creation and the SetSel that seem to get swallowed.

    Read the article

  • Amount of data stored in session

    - by srinannapa
    What technique should we use to keep the HttpSession object from being loaded heavily with data? Example:

        Request 1 ---- HttpSession loaded with an ArrayList of 50,000 different objects: session.setAttribute("data", arraylist);
        Request 2 ---- HttpSession loaded with an ArrayList of 40,000 different objects: session.setAttribute("data", arraylist);

    Assume the server is loaded heavily, with multiple sessions and huge data in them. Let us say, from my example above, requests 1..1000 arrive at a time, which means 1000 session objects with huge data. What is the alternative to storing the data in the session like this?

    Read the article

  • Finding the closest grid coordinate to the mouse position with javascript/jQuery

    - by Acorn
    What I'm trying to do is make a grid of invisible, equally spaced coordinates on the page. I then want a <div> to be placed at whatever grid coordinate is closest to the pointer when onclick is triggered. Here's the rough idea: I have the tracking of the mouse coordinates and the placing of the <div> worked out fine. What I'm stuck on is how to approach the grid of coordinates. First of all, should I keep all my coordinates in an array that I then compare my onclick coordinate against? Or, seeing as my grid coordinates follow a rule, could I do something like finding out which coordinate that is a multiple of my spacing is closest to the onclick coordinate? And then, where do I start with working out which grid point is closest? What's the best way of going about it? Thanks!
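    The snapping arithmetic itself is a one-liner per axis. A sketch in Python purely to show the idea (the same expressions translate directly to JavaScript with Math.round); the function name and spacing value are illustrative:

        def snap_to_grid(x, y, spacing=50):
            # round each coordinate to the nearest multiple of the grid spacing
            gx = round(x / spacing) * spacing
            gy = round(y / spacing) * spacing
            return gx, gy

        print(snap_to_grid(237, 481))   # (250, 500)

    With this approach there is no need to keep an array of every grid point; the nearest point falls straight out of the rounding.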

    Read the article

  • Why do socket.makefile objects fail after the first read for UDP sockets?

    - by Eli Courtwright
    I'm using the socket.makefile method to create a file-like object on a UDP socket for the purposes of reading. When I receive a UDP packet, I can read the entire contents of the packet all at once by using the read method, but if I try to split it up into multiple reads, my program hangs. Here's a program which demonstrates this problem:

        import socket
        from sys import argv

        SERVER_ADDR = ("localhost", 12345)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(SERVER_ADDR)
        f = sock.makefile("rb")

        sock.sendto("HelloWorld", SERVER_ADDR)

        if "--all" in argv:
            print f.read(10)
        else:
            print f.read(5)
            print f.read(5)

    If I run the above program with the --all option, then it works perfectly and prints HelloWorld. If I run it without that option, it prints Hello and then hangs on the second read. I do not have this problem with socket.makefile objects when using TCP sockets. Why is this happening and what can I do to stop it?
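    For contrast, a hedged sketch of reading the datagram in one recv call and slicing it afterwards, which is one common way around this kind of hang (each recvfrom call returns exactly one whole datagram); it is not necessarily the fix the poster eventually used, and it follows the Python 2 print syntax of the code above:

        data, addr = sock.recvfrom(4096)   # one call returns one whole datagram
        print data[:5]                     # "Hello"
        print data[5:]                     # "World"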

    Read the article

  • File system to choose for NAS box based on FreeNAS or OpenFiler [closed]

    - by Chris
    I'm a photographer and I need to find a better method of storing my photos than multiple USB2 drives hanging off USB hubs. Currently I use a MacBook Pro and 6 external drives connected via USB2 or FW800. Three are a copy of the first three, kept up to date manually by running an rsync backup. I'd like to run a FreeNAS or OpenFiler NAS box using 2TB drives mirrored via software RAID, but I would also like the flexibility of plugging into the drive physically for faster throughput when necessary. So my question is: is there a file system that both *nix and Mac OS X will play nice with? Many thanks, Chris.

    Read the article

  • BDE, Delphi, ODBC, SQL Native Client & Dead lock

    - by EspenS
    Hi. We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're experiencing deadlocks in a loop doing inserts into multiple tables. The whole loop is done within a [TDatabase].StartTransaction. Looking at SQL Server Profiler, we clearly see that at one point during the loop the SPID (session ID?) changes, and then we naturally end up with a deadlock (both SPIDs doing inserts into the same table). It seems like the BDE at some point opens a second connection to the DB... (Although I would love to skip the BDE, that's currently not possible.) Anyone with experiences to share?

    Read the article

  • Google Analytics Widget Tracking

    - by Kevin Lamb
    Hello, I have a form that is generated with JavaScript on a customer's website (let's say customer.com); a user fills it out and it is sent to my site (app.com). I would like to be able to provide information to the customer such as how effective their AdWords campaign was and what search engines users used before ending up filling out the form. I have started looking at the multiple-domain linking option, but I am not sure this is the right way. Is there a way to query what search engine and keywords they used and pass this along with the form?

    Read the article

  • Windows Autorun for an HTML file

    - by maestrojed
    I have an HTML file on a flash drive that I would like to autorun in Windows. I have found examples of multiple ways to do this, but none of them are working for me. Can anyone see what I am doing wrong? This is my latest attempt:

        [autorun]
        icon=data/favicon.ico
        label=My Project
        open=ShellRun.exe OPEN-ME.htm

    This was another attempt:

        [autorun]
        icon=data/favicon.ico
        label=My Project
        shellexecute=OPEN-ME.html
        shell\openme=Learn More About My Project
        shell\openme\command=OPEN-ME.html
        shell=openme

    Some of this is working, like the icon and the label; just not the autorun.

    Read the article

  • Doctrine2 Multiple DBs without Symfony2?

    - by ehime
    Hey guys, I know that using the DoctrineBundle in Symfony2 it is possible to instantiate multiple DB connections under Doctrine...

        $connectionFactory = $this->container->get('doctrine.dbal.connection_factory');
        $connection = $connectionFactory->createConnection(array(
            'driver'   => 'pdo_mysql',
            'user'     => 'foo_user',
            'password' => 'foo_pass',
            'host'     => 'foo_host',
            'dbname'   => 'foo_db',
        ));

    I'm curious whether this is still the case if you are using PURELY Doctrine, though. I've set up Doctrine via Composer like so...

        {
            "config": {
                "vendor-dir": "lib/"
            },
            "require": {
                "doctrine/orm": "2.3.4",
                "doctrine/dbal": "2.3.4"
            }
        }

    I have been looking for my ConnectionFactory class but am not seeing it anywhere. Am I required to use Symfony2 to do this? Thanks!

    Read the article

  • Building a case for solr

    - by Midhat
    Our product consists of multiple applications, all using Lucene. Two of the applications I am involved with have Lucene indexes of about 3 GB and 12 GB. Another team is building an application for which they estimate the Lucene index size to be close to 1 terabyte. New documents are added to the indexes approximately every 15 days. We do not have any apparent performance issues with the current applications. So my questions are: Should we be using Solr now? When should one stop using Lucene and graduate to Solr? Are there any disadvantages or problems in using Solr? The client applications are written in ASP.NET, but I assume they will be able to use a Solr server via SolrNet.

    Read the article

  • Global find object references in NHibernate

    - by Miral
    Is it possible to perform a global reverse find on NHibernate-managed objects? Specifically, I have a persistent class called "Io". There are a huge number of fields across multiple tables which can potentially contain an object of that type. Is there a way, given a specific instance of an Io object, to retrieve a list of objects (of any type) that actually do reference that specific object? (Bonus points if it can identify which specific fields actually contain the reference, but that's not critical.) Since the NHibernate mappings define all the links (and the underlying database has corresponding foreign key links), there ought to be some way to do it.

    Read the article

  • CodePlex Daily Summary for Monday, July 09, 2012

    Popular Releases

        DbDiff: Database Diff and Database Scripting 1.1.3.3: SQL 2005 and SQL 2012 fixes. Removed the recommended default dbdiff exe because it was a wrong build.
        re-linq 1.13.158: This is build 1.13.158 of re-linq. Find the complete release notes for the build here: Release Notes.
        MVVM Light Toolkit V4 RTM: The issue with the installer is fixed, sorry for the problems, folks. This version supports Silverlight 3, Silverlight 4, Silverlight 5, WPF 3.5 SP1, WPF4, Windows Phone 7.0 and 7.5, and WinRT (Windows 8). Support for Visual Studio 2010 and Visual Studio 2012 RC.
        BlackJumboDog Ver5.6.7: 2012.07.08 release; the Japanese release notes are garbled in the source and could not be recovered (they mention Request.Receve() and the Web server).
        TaskScheduler ASP.NET Release 3 - 1.2.0.0: This version alters only the library. New properties were added to TaskScheduler: UseBackgroundThreads enables the use of separate threads for each task; StoreThreadsInPool enables the manager to store the threads that are performing the tasks in the pool; OnStopSchedulerAutoCancelThreads allows the scheduler to abort threads when it is stopped (false means the scheduler does not abort the threads that are running); AutoDeletedExecutedTasks allows the manager to delete a task afte...
        DotNetNuke Persian Packages 6.2.0: the Persian release notes are garbled in the source and could not be recovered.
        xUnit.net Contrib xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner). This release provides a test runner plugin for ReSharper 7.0 (EAP build 82) and 6.1, targeting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous versions of ReSharper can be downloaded from this release. The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1). Also note that all builds work against ALL VERSIONS...
        Umbraco CMS 4.8.0 Beta: What's new: uComponents in the core (Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists), easier Lucene searching built in, IFile providers for easier file handling, updated 3rd-party libraries, applications/trees moved out of the database, SQL Azure support added, various bug fixes. Getting started: a great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...
        40FINGERS DotNetNuke StyleHelper Skin Object 02.06.00: Version 02.06.00 Beta. If you upgrade to this version and have used redirecting in your existing skins, make sure to test them! Changes to Redirect: the way redirects were handled in a previous version was quite basic. I mostly used it to redirect mobile devices from the home page of a regular website to a mobile site. I came across a more complex problem for a client where certain pages should be redirected to the matching mobile pages. To suppor...
        CODE Framework 4.0.20704.0: See the CODE Framework (.NET) Change Log for changes in this version.
        xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit) 1.9.1: xUnit.net release 1.9.1, build #1600. Important note for ReSharper users: ReSharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: if you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: the VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...
        MVC Controls Toolkit 2.2.0: Added: all MVC4-related features modified to conform with the MVC4 RC; all items controls now accept any IEnumerable<T> (before, just List<T> was accepted by most controls); a retrievalManager class that retrieves data automatically from a data source whenever it catches events triggered by filtering, sorting and paging controls; a move method on the updatesManager to move one child object from one father to another. The move operation can be undone like the insert, update and delete operatio...
        IronPython 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...
        Media Assistant 3.0 Alpha: Migrates the database from SQL Compact Framework to a SQLite database.
        Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.
        Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide an "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for Issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references, so a single source file ends up with the same minified names on different platforms. Add the ability to specify kno...
        LogExpert 1.4 build 4566: This release for the 1.4 version line contains various fixes which were made some time ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for issue 710. Column finder (press F8 to show). Terminal server issues: multiple sessions with the same user should work now. Settings export/import is available via the Settings dialog but still incomplete (e.g. tab colors are not saved; maybe I will change the file format one day; no command line support yet for importin...)
        Office Integration Pack 1.03: Version 1.03 adds a couple of new capabilities, including support for exporting images to Word content controls and tables, and the ability to import all data from Sheet1 with a column mapping, without specifying a range or worksheet.
        Enterprise Library 5 Caching with ProtoBuf.NET: Initial release of the implementation.
        SharePoint Common Framework SharePointCommon 1.1: added lazy loading of any entity property; improved performance on big lists.

    New Projects

        Adf: wewf
        AttilaxToPinyin: pinyinConvert.v20120709 (the remaining Chinese description is garbled in the source and could not be recovered).
        Build Time: A software project for building timetables for teachers, classrooms and courses.
        ChainOfResponsibility: An example C# implementation of the Chain of Responsibility pattern.
        Dauphine SmartControls for K2 blackpearl: Dauphine SmartControls for K2 blackpearl simplifies the integration of K2 blackpearl processes with ASP.NET web forms.
        Excel add-in for range databases: Range database for Excel.
        OhHai Browser: OhHai Browser is an open-source web browser running WebKit.NET. It has just been released after a year of beta testing.
        PHP File System: A PHP class library to facilitate working with the file system.
        QScanLibrary: A simple anti-malware library.
        RGraphWebApi: A project about the MVC4 Web API service layers for the RGraph JavaScript charts library.
        Rhomble: Rhomble
        Shipping.ByTotal plugin for nopCommerce: This nopCommerce plugin is a shipping rate computation method that provides preconfigured shipping rates based on an order's subtotal.
        SideWinder Strategic Commander Revival: Provides an editor for the "Microsoft SideWinder Strategic Commander" that is compatible with the latest Windows edition.
        Software Testing Suite PowerShell: A suite project embracing several individual PowerShell modules. The project provides unified reporting and test management.
        TPM: A C# program that edits and mixes texture packs for the game Minecraft.
        WCF with XML Serialization: A simple WCF web service created with the Web Service Software Factory pattern.
        Windows 8 Minesweeper: A Minesweeper clone in the Windows 8 Metro style, developed in JavaScript and HTML5.
        XWin Server: The CodePlex site for XWin Server/GUI.
        ??Nop???: The project name and description are garbled Chinese in the source (relates to Nop).

    Read the article

  • How do I create a reusable WF sequential workflow?

    - by djgiard
    I have two customers that have the same workflow (create file - transport file - wait for response - send response to internal team); however, the implementation of each step is different for each customer. For example, one customer requires a flat file to be sent via SFTP, while the other customer requires an XML file to be sent via FTP. I'd like to create a sequential workflow using Windows Workflow Foundation (WF) and reuse this workflow for multiple vendors. Each action's call to an external module can use the same interface, but a different concrete implementation. However, I'm unfamiliar with WF and I'm not sure how to implement this. Can someone point me to the proper way to use this pattern? Will it make a difference whether I choose WF 3.5 or WF 4.0? Thank you.
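    The "same interface, different concrete implementation per customer" part is independent of WF itself. A hedged sketch of that idea in Python, with invented names, purely to illustrate the shape; in WF the equivalent would typically be custom activities or injected services rather than classes like these:

        from abc import ABC, abstractmethod

        class FileTransport(ABC):
            """One step of the shared workflow; each customer supplies its own implementation."""
            @abstractmethod
            def send(self, payload: bytes) -> None: ...

        class SftpFlatFileTransport(FileTransport):
            def send(self, payload: bytes) -> None:
                print("sending flat file over SFTP")   # placeholder for the real SFTP call

        class FtpXmlTransport(FileTransport):
            def send(self, payload: bytes) -> None:
                print("sending XML file over FTP")     # placeholder for the real FTP call

        def run_workflow(transport: FileTransport, payload: bytes) -> None:
            # the sequence is shared; only the injected step implementations differ per customer
            transport.send(payload)
            # ... wait for response, notify internal team, etc.

        run_workflow(SftpFlatFileTransport(), b"data for customer A")
        run_workflow(FtpXmlTransport(), b"data for customer B")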

    Read the article

  • How to prevent popups when loading a keystore

    - by Newtopian
    Hi, as a corollary to this question I wanted to ask if you know how to prevent the popping up of dialogs, either asking for a password or asking to insert a certificate. We are currently building a system where we have to use the Windows keystore to get certificates that are stored on a USB token containing both reader and certificate. Unlike the original question, we do not experience problems when loading the keystore, but when we are accessing it. If there is only a single certificate in the keystore there is no problem: we get the appropriate password popup at the appropriate time and that's it. However, if a second USB key gets inserted in the system and later removed, the entry remains in the keystore, and from then on, every time we try to access information in the keystore we get a popup asking us to insert the key. This occurs for every certificate in the store whose key is not currently connected to the computer. The system we are interfacing with requires these certificates for multiple cryptographic operations, and having these popups come up every time is rather annoying, to say the least.

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware

    Cloud Computing allows customers to quickly develop and deploy applications in a shared environment. The environment can span hardware (IaaS), foundation-layer software (PaaS), and end-user software (SaaS). Cloud Computing provides compelling benefits in terms of business agility and IT cost savings. However, with complex, existing heterogeneous architectures and concerns about security and manageability, enterprises are challenged to define their Cloud strategy. For most enterprises, the solution is a hybrid of private and public cloud. Fusion Middleware supports customers' Cloud requirements through choice and portability.

    Fusion Middleware supports a variety of cloud development and deployment models: Oracle [Public] Cloud; customer private cloud; a hybrid of these two; and the traditional dedicated, on-premise model. Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need.

    Oracle Cloud is a public cloud offering. Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service.

    Developer Cloud Service
        Simplify Development: automated provisioned environment; pre-configured and integrated; web-based administration
        Deploy Automatically: fully integrated with Oracle Cloud for Java deployment; workflow ensures build and test
        Collaborate & Manage: fits any size team; integrated team source repository; continuous integration; task/defect tracking
        Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse

    Java Cloud Service
    The Java Cloud Service provides a flexible Java deployment environment for departmental applications and for development, staging, QA, training, and demo environments. It also supports customization deployments for SaaS-based Fusion Applications customers. Some key features of the Java Cloud Service include:
        WebLogic Server on Exalogic: secure, highly available infrastructure
        Database Service & IDE integration
        Open, standards-based: deploy web apps, web services, REST services
        Fully managed and supported by Oracle

    For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service.

    If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need. Sometimes called Private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade. The differences, however, are in the scope of the services and the depth of their capabilities. In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need. Horizontally, private clouds provide monitoring, management, lifecycle, and charge-back capabilities out of the box that shared-services platforms did not have before.

    Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds:
        SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization
        Identity and Access Management suite to provide security, provisioning, and access services for applications deployed on your private cloud
        WebLogic Server to run your applications
        Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud
        Exalogic or optimized Oracle-Sun hardware to build out your private cloud

    The most important differentiator for Oracle's cloud solutions is portability between private and public clouds. This is unique to Oracle, because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings. Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds. Conversely, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud. Oracle can. It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds. Fundamentally, that enables skills reuse as well as application portability. For more information on Oracle PaaS offerings, please visit Oracle's product information page.

    Resources: Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server data warehouse design, Analysis Services and enterprise reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges. Enough about me; on to a real problem.

    SSIS Connection Timeouts versus Command Timeouts

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to, and dropping old partitions from, an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if any partitions are present that are no longer needed in a rolling 60-month window.

    My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again it failed. After a number more failed attempts, and after getting the Teradata DBA involved to monitor whether we were connecting and then failing or simply failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.

    At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold:
        Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
        SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I can only recall one other time when I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article
