Search Results

Search found 58436 results on 2338 pages for 'multipartform data'.


  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and gives you the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance uses high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and the Disk Group Redundancy govern how much usable storage is presented once the initial deployment is completed.

    The Database Backup Location options are Local or External. When the Local Backup Option is selected, 60% of the available shared storage is allocated to the Fast Recovery Area, which contains database backups and archive logs. The External Backup Option allocates 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of High Redundancy with External Backups:

    Disk Group Redundancy – High --> triple mirroring provides ~4TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage

    What about Normal Redundancy with External Backups?

    Disk Group Redundancy – Normal --> double mirroring provides ~6TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage

    As a best practice, we recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
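    The arithmetic generalizes: mirroring divides the raw capacity, and the backup location fixes the Fast Recovery Area's share of what remains. A minimal sketch reproducing the figures above (a simplified model of the rules described in this post, not an Oracle tool; all names are illustrative):

    // Hypothetical sizing helper based on the rules described above.
    public class OdaStorageExample {

        static void size(String label, double rawTb, int mirrorCopies, double recoShare) {
            double usable = rawTb / mirrorCopies; // mirroring divides raw capacity
            double reco = usable * recoShare;     // share allocated to the Fast Recovery Area
            double data = usable - reco;          // the remainder goes to +DATA
            System.out.printf("%s: +DATA = %.1fTB, +RECO = %.1fTB%n", label, data, reco);
        }

        public static void main(String[] args) {
            double raw = 12.0; // 12TB of raw shared storage
            size("High redundancy, external backups", raw, 3, 0.20);   // +DATA = 3.2TB, +RECO = 0.8TB
            size("Normal redundancy, external backups", raw, 2, 0.20); // +DATA = 4.8TB, +RECO = 1.2TB
        }
    }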

    Read the article

  • Game Design - When to separate out pieces into static libraries?

    - by Jason
    I am developing a game that has a lot of platform-generic pieces. I want to separate out various pieces into static libraries and I would like to know what other devs do. I am considering targeting other platforms and I want to maintain as much platform neutrality as I can. I have a lot of generic level data in C++ classes; I'm thinking all of the level data could go into a single static library. I have a lot of generic OpenGL code that I think could also go into a single static library. I am already using CMake for some of it and Xcode 4.5 for the Apple-specific pieces. What do other devs do to stay platform neutral? Does anyone use Eclipse instead of Xcode and Visual Studio on Windows?

    Read the article

  • How to interpret Google's "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blisteringly fast one? We're just reviewing some Google Analytics data and seeing an Avg. Page Load Time of 0.74 seconds. I guess that's OK. However, it would be good if some meatier comparison data were available, or a blog post, or somewhere with some analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it you just get a lot of results about how to improve your speed. We're not at that stage yet.

    Read the article

  • Pattern for loading and handling resources

    - by Enoon
    Many times there is a need to load external resources into the program, be they graphics, audio samples or text strings. Is there a pattern for handling the loading and the handling of such resources? For example: should I have a class that loads all the data and then call it every time I need the data? As in:

    GraphicsHandler.instance().loadAllData()
    ...//and then later:
    draw(x,y, GraphicsHandler.instance().getData(WATER_IMAGE))
    //or maybe
    draw(x,y, GraphicsHandler.instance().WATER_IMAGE)

    Or should I assign each resource to the class where it belongs? As in (for example, in a game):

    Graphics g = GraphicsLoader.load(CHAR01);
    Character c = new Character(..., g);
    ...
    c.draw();

    Generally speaking, which of these two is the more robust solution?

    GraphicsHandler.instance().getData(WATER_IMAGE)
    //or
    GraphicsHandler.instance().WATER_IMAGE //a constant reference
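    For what it's worth, the two styles can be combined: a single loader that caches, with the loaded resource injected into the object that uses it. A minimal sketch along those lines, reusing the (hypothetical) names from the question:

    import java.util.HashMap;
    import java.util.Map;

    class Graphics {
        // wraps a loaded image/texture
    }

    class GraphicsLoader {
        private static final Map<String, Graphics> cache = new HashMap<>();

        // Each resource is read from disk once; later calls return the cached copy.
        static Graphics load(String id) {
            return cache.computeIfAbsent(id, key -> {
                // ... decode the file for 'key' here ...
                return new Graphics();
            });
        }
    }

    class Character {
        private final Graphics graphics;

        Character(Graphics graphics) {
            this.graphics = graphics; // the resource is injected, not looked up globally
        }

        void draw() {
            // render using this.graphics
        }
    }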

    Read the article

  • Register for the Solaris 11.1 and Solaris Cluster webcast!

    - by Karoly Vegh
    On 7 November there will be a live webcast about Oracle Solaris 11.1 and Oracle Solaris Cluster that you do not want to miss: the Online Launch Event: Oracle Solaris 11 - Innovations for your Data Center. This live webcast will have three sessions:

    Executive Keynote: Oracle Solaris 11 - Innovations for your data center
    Oracle Technical Session: Oracle Solaris 11.1
    Oracle Technical Session: Oracle Solaris Cluster

    There will be a live Q&A session, but feel free to tweet as well with #solaris. See you there! -- charlie

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. That article has proved pretty popular, so as a follow-up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module, at runtime.

    Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality is that this single connection is effectively fixed within the application, right? Well, no; this, it turns out, is a common misconception. To be clear, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections. In fact this has been possible for a long time using a custom extension point, with code that extends oracle.jbo.http.HttpSessionCookieFactory. This approach, however, involves writing code, and no-one likes to write any more code than they need to. So, is there an easier way? Yes indeed: a little-publicized feature that's available in all versions of 11g, the ELEnvInfoProvider.

    What Does it Do?

    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a data-source, you can plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM.

    To define the expression itself you'll need to do a couple of things. First of all you'll need a managed bean of some sort, e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your application server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] on how to configure connections at runtime if you're not familiar with this.) The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
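    A minimal sketch of such a bean, assuming it is registered in session scope under the name connectionManager so that ${connectionManager.connection} resolves to its getter (the class, method and data-source names below are illustrative, not part of the framework):

    // Illustrative session-scoped bean backing ${connectionManager.connection}.
    public class ConnectionManager {

        // Called when the Application Module is first instantiated for this session.
        public String getConnection() {
            if (isEmeaUser()) {
                return "jdbc/EmeaAppDS";    // a data-source defined on the application server
            }
            return "jdbc/DefaultAppDS";
        }

        private boolean isEmeaUser() {
            // ... inspect the authenticated user / security context here ...
            return false;
        }
    }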
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):

    <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
      <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
        <AppModuleConfig DeployPlatform="LOCAL"
                         JDBCName="${connectionManager.connection}"
                         jbo.project="oracle.demo.model.Model"
                         name="TargetAppModuleLocal"
                         ApplicationName="oracle.demo.model.TargetAppModule">
          <AM-Pooling jbo.doconnectionpooling="true"/>
          <Database jbo.locking.mode="optimistic"/>
          <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
          <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
        </AppModuleConfig>
      </AppModuleConfigBag>
    </BC4JConfig>

    Still Don't Quite Get It?

    So far you might be thinking: well, that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? A trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account.

    However, think about it for a second: under what circumstances does a new AM get instantiated? At the first reference of the AM within the application, yes, but also whenever a Task Flow is entered, if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful; let me know...

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, generally is not. This is why we thankfully have an opportunity to leverage metadata, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interest of keeping this process pragmatic, there is little day-to-day activity that needs to happen here. Most of the activity that happens post-creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage phase will focus on updating content and the metadata associated with it - specifically around ownership. Oftentimes the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.

    1. Update Content Ownership (as needed and on a quarterly basis)

    Why - Without updating content to reflect ownership changes, it will be impossible to continue to meaningfully govern the content.

    How - Run reports against the metadata (creator and department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned, or, if the content is no longer needed by that department, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners against the users that exist in the directory. The delta will indicate which content will need new ownership assigned.

    Impact - This implicitly keeps your repository and search collection clean. A very large percentage of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.

    Read the article

  • Setting up FastCGI on an Ubuntu server (socket file permissions issue)

    - by gray alien
    I am trying to set up mod_fcgid on my server. Part of the requirement is that Apache needs to create a socket file for mod_fcgid. I specified the folder for Apache to write the socket data to: /var/run/apache2/fcgid. I then specified this in my fcgid.conf file as follows:

    SocketPath /var/run/apache2/fcgid/sock

    I then changed the owner of the folder to www-data (the Apache user) and gave the owner full permissions on the folder and its contents. I was then able to run my test FastCGI app. When I rebooted the machine, my FastCGI app no longer worked. After some investigation, I found that ownership of /var/run/apache2/fcgid had been reset to root, with permissions reset to 700. I have the following questions: Is there something specific about the /var/run folder? Why are the permissions reset after a reboot? Should I move my socket file to another location (in case root automatically takes ownership of contents in this folder for security reasons)? I am running Ubuntu 10.04 LTS 64-bit.

    Read the article

  • YouSendIt Alternative?

    - by user4855
    Looking for a reasonably priced alternative to YouSendIt's exorbitant pricing for an embedded, unbranded (i.e. no "Uploads by SomeCompany" or, at the very least, discreet, subtle co-branding) file upload solution for my client's print shop website. To do what we want to do with YouSendIt, we're looking at a corporate account of $995 USD plus a $29.99 USD monthly fee that is only sold pro-rated, so you have to buy the entire year's worth. To me, this is just unacceptable considering the commodity pricing of storage and bandwidth nowadays. For data, we're looking at roughly 10MB per upload, with perhaps 250-1000 uploads per month, with transient data storage of no more than 30 days (and more than likely 1-2 business days), for a total of at most 10 GB transfer (upload) and 10 GB transfer (download, to the print shop) each month. Any ideas? Everything I've found through searching seems to be geared more towards personal file sharing and not towards embedding into websites. Thanks

    Read the article

  • Encrypted Ubuntu Partition HELP!

    - by user207984
    I had Zorin 7, encrypted. I accidentally deleted important GRUB data while trying to make more room in the boot folder, I think, so the computer would no longer boot. Anyway, I installed a second Linux OS on the same laptop, but I am unable to access all of my important information that is still on the encrypted partition. When I go to mount it to access everything, it asks for the encryption password; however, it tells me the password is incorrect, which I know 100% is the correct password. Someone please help me! Tell me how I can access this encrypted partition or restore the data so that I can boot the computer into Zorin 7 again.

    Read the article

  • GridView - Conditional Images

    This GridView sample shows how, for each row, based on other data within that row, to show a different image. We do this by creating a TemplateField and putting an ASP.NET Image control within it, called 'Image1'. Then, inside the RowDataBound event of the GridView, we put code which first checks and finds the Image control in that row, and then assigns a different JPG file to the ImageURL property of that image. One other thing you'll notice is that when the criteria are matched, we set the Image control's Visible property to 'True'. That's because one extra criterion is that if the Units In Stock is larger than 80, we set the Image control's Visible property to 'False'. Naturally, since we're checking one particular column in the GridView for this data, we're using a Select Case statement.

    Read the article

  • Size of an image imported with FreeImage

    - by KaiserJohaan
    I'm having a bit of a brainfart and I can't quite grasp what I'm doing wrong. It's quite simple: I am importing an image with FreeImage (http://freeimage.sourceforge.net/), which has a method FreeImage_GetBits that returns a pointer to the first byte of the image data. I then try to load all the data into memory using (bitsPerPixel / 8) * pixelsWidth * pixelsHeight, like this:

    uint32_t bitsPerPixel = FreeImage_GetBPP(bitmap);      // resolves to 24
    uint32_t widthInPixels = FreeImage_GetWidth(bitmap);   // resolves to 1024
    uint32_t heightInPixels = FreeImage_GetHeight(bitmap); // resolves to 1024

    // container is a std::vector<uint8_t>
    pkgMaterial.mTextureData.insert(pkgMaterial.mTextureData.begin(),
                                    FreeImage_GetBits(bitmap),
                                    FreeImage_GetBits(bitmap) + ((bitsPerPixel / 8) * widthInPixels * heightInPixels));

    I have a JPG which is 31 kilobytes in size on disc. Yet when I load it using the above formula, I see the vector is then filled with 3145728 bytes, which is approx 3145 kilobytes. What am I doing wrong?

    Read the article

  • DDD: service contains two repositories

    - by tikhop
    Is it correct to have two repositories inside one service, and would it be an application or a domain service? Suppose I have a Passenger object that should contain a Passport (government ID) object. I am getting the Passenger from a PassengerRepository. The PassengerRepository creates a request to the server, obtains the data (JSON), then parses the received data and stores it inside the repository. I am confused because I want to store Passport as an Entity and put it in a PassportRepository, but all the information about the passport is contained inside the JSON I received above. I guess that I should create a PassengerService that will include the PassengerRepository and the PassportRepository, with several methods like removePassport, addPassport, getAllPassengers, etc.

    UPDATE: So I guess that the better way is to represent Passport as a VO and store all passports inside the Passenger aggregate. However, there is another question: where should I put the methods (methods that call the server API) for managing a passenger's passport? I think the better place is within the Passenger aggregate.
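    A minimal sketch of the design described in the update, with Passport as an immutable value object held inside the Passenger aggregate (all names are illustrative):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    final class Passport {                 // value object: compared by value, no identity of its own
        private final String number;
        private final String issuingCountry;

        Passport(String number, String issuingCountry) {
            this.number = number;
            this.issuingCountry = issuingCountry;
        }

        String number() { return number; }
        String issuingCountry() { return issuingCountry; }
    }

    class Passenger {                      // aggregate root; all passport changes go through it
        private final String id;
        private final List<Passport> passports = new ArrayList<>();

        Passenger(String id) { this.id = id; }

        void addPassport(Passport passport) { passports.add(passport); }
        void removePassport(Passport passport) { passports.remove(passport); }

        List<Passport> passports() { return Collections.unmodifiableList(passports); }
    }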

    Read the article

  • Best Way to Handle Meta Information in a SQL Database

    - by danielhanly.com
    I've got a database where I want to store user information and user_meta information. The reason behind setting it up this way was that the user_meta side may change over time, and I would like to handle that without disrupting the master user table. If possible, I would like some advice on how best to set up this metadata table. I can either set it up as below:

    +----+---------+---------+--------------------+
    | id | user_id | key     | value              |
    +----+---------+---------+--------------------+
    | 1  | 1       | email   | [email protected]  |
    | 2  | 1       | name    | user name          |
    | 3  | 1       | address | test address       |
    ...

    Or, I can set it up as below:

    +----+---------+--------------------+-----------+--------------+
    | id | user_id | email              | name      | address      |
    +----+---------+--------------------+-----------+--------------+
    | 1  | 1       | [email protected]  | user name | test address |

    Obviously, the top version is more flexible, but the bottom version is space-saving and perhaps more efficient, returning all the data as a single record. Which is the best way to go about this? Or am I going about this completely wrong, and is there another way I've not thought of?

    Read the article

  • Greenpeace assesses the impact of cloud computing on the environment: Google and Yahoo praised, Apple and Facebook called to order

    Greenpeace assesses the impact of cloud computing on the environment. The association praises Google and Yahoo and calls Apple and Facebook to order. Greenpeace has just released its traditional report on the ecological impact of new technologies. This time, the activist organization has taken a particular interest in cloud computing. The electricity consumption of data centers, judged to be too high, is singled out. With figures to back it up, Greenpeace estimates that in 10 years, the infrastructure behind "computing in the cloud" (networks and data centers) will consume as much as Germany, Brazil, Canada and France combined. The association...

    Read the article

  • Analyzing I/O Characteristics and Sizing Storage Systems for SQL Server Database Applications

    Understanding how to analyze the characteristics of I/O patterns in the Microsoft® SQL Server® data management software, and how they relate to a physical storage configuration, is useful in determining deployment requirements for any given workload. A well-performing I/O subsystem is a critical component of any SQL Server application. I/O subsystems should be sized in the same manner as other hardware components such as memory and CPU. As workloads increase, it is common to increase the number of CPUs and the amount of memory. Increasing disk resources is often necessary to achieve the right performance, even if there is already enough capacity to hold the data.

    Read the article

  • Is server validation necessary with client-side validators?

    - by peroija
    I recently created a .NET web app that used over 200 custom validators on one page. I wrote code for both ClientValidationFunction and OnServerValidate, which results in a ton of repetitive code. My SQL statements are parameterized, and I have functions that pull data from input fields and validate it before passing it to the SQL statements or stored procedures. And the JavaScript validates the fields before the page submits. So essentially the data is clean and valid before it even hits OnServerValidate, and clean after it anyway due to the aforementioned steps. This makes me question: is OnServerValidate really needed when I validate on the client side?

    Read the article

  • Ubuntu 12.04 x64 doesn't boot anymore after a power failure

    - by Felix
    I'm a Windows user and I have no experience with Linux and Ubuntu. I installed Ubuntu 12.04 on my netbook (Asus 1215B) and everything worked fine. Yesterday I ran the update application and updated over 120 "things" (I have no idea what exactly). After that I was asked to reboot, and I did. Ubuntu started again, and at the load screen with the 5 dots that normally begin to change color, it froze. After 20 minutes I took out the battery to try another reboot (yes, not the best idea), and now nothing happens. I boot from the HDD and I get the error "BOOTMGR is missing". I have important data on the hard drive. Is there a way to get this fixed? Or, if not, to at least get the data off the hard drive?

    Read the article

  • How should I define my Java Objects?

    - by HonorGod
    I have a data grid where I show, more or less, the following information:

    All Guests
        Total Adults = 22
        Total Children = 27
    Confirmed
        Total Adults = 9
        Total Children = 13
        Country = Germany
            Total Adults = 5
            Total Children = 6
            Friends: Adults = 2, Children = 2
            Relatives: Adults = 3, Children = 4
        Country = USA
            Total Adults = 4
            Total Children = 7
            Friends: Adults = 2, Children = 5
            Relatives: Adults = 2, Children = 2
    Tentative
        Total Adults = 13
        Total Children = 14
        Country = Australia
            Total Adults = 7
            Total Children = 8
            Friends: Adults = 2, Children = 3
            Relatives: Adults = 5, Children = 5
        Country = China
            Total Adults = 6
            Total Children = 6
            Friends: Adults = 2, Children = 4
            Relatives: Adults = 4, Children = 2

    In the database, what I have is data at the lowest level, which is Friends / Relatives, with the corresponding country set as a look-up value that is indirectly connected to another look-up that can tell me whether they fall under Confirmed or Tentative. I guess my question is: how do I lay out my Java objects, perform the aggregations and give the result back to the client? I am not sure if my question is clear, but feel free to comment so I can update it accordingly. A possible object layout is sketched below.
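    One possible layout, with illustrative names: counts are kept per relation and rolled up to the country and status levels, mirroring the grid above.

    import java.util.LinkedHashMap;
    import java.util.Map;

    class Counts {
        int adults;
        int children;

        void add(int adults, int children) {
            this.adults += adults;
            this.children += children;
        }
    }

    class CountryGroup {
        final Counts totals = new Counts();
        final Map<String, Counts> byRelation = new LinkedHashMap<>(); // "Friends", "Relatives"

        void add(String relation, int adults, int children) {
            byRelation.computeIfAbsent(relation, r -> new Counts()).add(adults, children);
            totals.add(adults, children); // roll up to the country total
        }
    }

    class StatusGroup {                   // one per status: "Confirmed", "Tentative"
        final Counts totals = new Counts();
        final Map<String, CountryGroup> byCountry = new LinkedHashMap<>();

        void add(String country, String relation, int adults, int children) {
            byCountry.computeIfAbsent(country, c -> new CountryGroup()).add(relation, adults, children);
            totals.add(adults, children); // roll up to the status total
        }
    }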

    Read the article

  • USB NTFS pen drive and hard drive not working after connection to Ubuntu

    - by lemon619
    As said in the title, I first connected a USB key (NTFS-formatted) with data on it to Ubuntu 12.04. After I disconnected it, I was no longer able to access the USB key from either Windows or Ubuntu 12.04; the system asks me to format it. Just now I connected an NTFS-formatted hard drive with data on it to Ubuntu 12.04. I ejected the hard drive correctly and connected it to Windows, and the system asks me to format the hard drive too, same as with the USB key. Can someone help me, please? I would really appreciate it!

    Read the article

  • Tracking scroll depth to reveal content engagement in Google Analytics

    This article investigates a way to track content engagement on your site. By monitoring how far down the page a visitor travels, and then recording the data in Google Analytics, you can discover how many of your visitors are reading your content all the way to the end.

    Introduction

    When browsing through your Google Analytics reports you will find a massive selection of data at your fingertips. You can find out how many people visited a page, how long they were there, what country they were...

    Read the article
