Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Simple multi-seat

    - by Oli
    I've asked about multiseat before. The answer (for 10.04) involved doing it the proper way (e.g. through gdm, multiple server layouts). The problem was that gdm needs to be patched or reverted to 2.20 for multiseat. It's an ugly hack that, worst of all, will hold up future updates. As a result, I didn't do anything. I still have a spare video card. I still have the monitor, keyboard and mouse all sitting waiting to jump into action. And I still want to be able to turn that into a simple desktop. My needs don't seem complicated. I have a second video card, a USB hub and anything connected to that USB hub that I want to be dedicated to another X server. I don't need a login screen (I'm happy hard-coding in an auto-login, and I'd be happy with the user starting the X server if that's possible). This is so simple in my head that I only need two questions: How can I explicitly start an X server from the command line on an unused video adapter (passing it whatever configuration I need to)? Can I have this new X session load a desktop environment on startup? This seems like something you should be able to write in a little upstart script within 10 minutes. That would be perfect for me, as then I'd have nice start/stop control over the secondary desktop from the main desktop (which I want to leave unscathed!). I'm thinking something as simple as this for the payload: su other_user -c "startx -- localhost hardware-information" And use .xinitrc to load openbox or something...
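    For what it's worth, a minimal sketch of such an upstart payload might look like the following. Everything here is an assumption to adapt: the display number, the virtual terminal, the user name, and the "Seat1" ServerLayout, which would have to be defined in xorg.conf against the second video card.

        # /etc/init/seat2.conf -- hypothetical upstart job for the second seat
        description "secondary X seat"
        stop on runlevel [016]
        respawn
        # :1 = second display, vt8 = a free virtual terminal,
        # Seat1 = a ServerLayout defined in xorg.conf for the spare card
        exec su other_user -c "startx -- :1 vt8 -layout Seat1"

    With no "start on" stanza, the seat only runs on demand: sudo start seat2 and sudo stop seat2 from the main desktop. The desktop environment then comes from the second user's ~/.xinitrc, e.g. a single line: exec openbox-session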

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this group. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, while preparing to deliver a lecture on "SQL Server 2008 R2 System Databases".

    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. Each of these system databases provides specific functionality that allows MS SQL Server to function.

        Name          Database File              Log File
        Master        master.mdf                 mastlog.ldf
        Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
        Model         model.mdf                  modellog.ldf
        MSDB          msdbdata.mdf               msdblog.ldf
        Distribution  distmdl.mdf                distmdl.ldf
        TempDB        tempdb.mdf                 templog.ldf

    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, because of the information it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.

    Resource Database
    Honestly, I never knew this database existed until I started to research SQL Server system databases, largely because the Resource database is hidden from users. In fact, its database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all the system objects that can be accessed by all other databases. In short, it contains all the system views and stored procedures that appear in every user database as system information. One of the many benefits of storing the system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; maintenance is also decreased, since only one code base has to be maintained for all of the system views and procedures.

    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. It allows default database objects to be predefined for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an "Audit" table, then defining the "Audit" table in the model guarantees that the table will be present in every new database created after the model is altered.

    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. SQL Server Agent uses this database to store job configurations and job schedules, along with SQL alerts and operators. In addition, this database stores each job's parameters and execution history. Finally, it is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.

    Distribution Database
    The Distribution database is only used during replication, and it stores metadata and history information pertaining to the replication of data. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.

    Tempdb Database
    Tempdb, as the name implies, is used to store temporary data and data objects; examples include temp tables and temp stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. SQL Server also uses this database when performing some internal operations, typically large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.

    I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
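    A quick way to see most of these on a live instance is to query the catalog views; a minimal sketch (the Resource database is hidden and will not appear in the results):

        -- List the system databases and their physical files.
        -- database_id 1-4 are master, tempdb, model and msdb;
        -- the distribution database only exists if replication is configured.
        SELECT d.name  AS database_name,
               mf.name AS logical_file,
               mf.physical_name
        FROM sys.databases d
        JOIN sys.master_files mf ON mf.database_id = d.database_id
        WHERE d.database_id <= 4
        ORDER BY d.database_id, mf.file_id;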

    Read the article

  • [EF + ORACLE] Updating and Deleting Entities

    - by JTorrecilla
    Prologue
    In previous chapters we have seen how to insert data through EF, with and without sequences. In this one, we are going to see how to update and delete data in the DB.

    Updating data
    Updating the data (properties) of an entity is a very common and easy action. Before changing any of the properties of the entity, we can check the EntityState property and see that it is EntityState.Unchanged. To make an update, we first need to get the entity that will be modified. In the following example, I use GetEmployeeByNumber to get a valid entity:

        EMPLEADOS emp = GetEmployeeByNumber(2);
        emp.Name = "a";
        emp.Phone = "2";
        emp.Mail = "aa";

    After modifying the desired properties of the entity, if we check the EntityState property again, it now has the value EntityState.Modified. To persist the changes to the DB it is necessary to invoke the SaveChanges function of our context:

        context.SaveChanges();

    If we check the EntityState property once more, we will see that the value is back to EntityState.Unchanged.

    Deleting data
    Another easy action is to delete an entity. The first step is to select the entity to delete from the DB:

        CLIENTES selectedClient = GetClientByNumber(15);
        context.CLIENTES.DeleteObject(selectedClient);

    Before invoking the DeleteObject function, we can check the EntityState property, whose value must be EntityState.Unchanged. After deleting the object, the state changes to EntityState.Deleted. To commit the action we have to invoke the SaveChanges function; after that, the EntityState property will be EntityState.Detached.

    Cascade
    Entity Framework allows cascade updates and deletes, although I have never seen a cascade update. What is a cascade delete? A cascade delete is an action that deletes all the objects related to the object we want to delete. This option can be established in the DB manager, or in the EF model designer. For example, take a given relation (1-N) between clients and requests. The common situation would be to only allow deleting those clients who have no requests. If we select the relation between both entities and right-click, we can see the properties panel of the relation. The grid shows the relation, indicating the master table (Clients) and the end point (Cabecera, or Requests). The property "End 1 OnDelete" indicates the action to take when an entity from the master is deleted. There are two options:
    - None: no action is taken; that is, if an entity has detail entities it cannot be deleted.
    - Cascade: all the entities related to the master entity will be deleted.
    If we enable cascade delete on a relation and invoke the DeleteObject function of the set, we can observe that all the related detail entities show the value EntityState.Deleted in their EntityState property. As with an update, insert or plain delete, the changes are not committed to the database until the SaveChanges function is invoked.

    Finally
    In this chapter we have seen how to update an entity, how to delete an entity, and how to implement cascade deletes through EF. In the next chapters we will see how to query the DB data.
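    Putting the two flows together, a minimal end-to-end sketch; the context type and the helper functions are the ones assumed in the examples above, not a complete implementation:

        // Update then delete, watching EntityState along the way.
        // MyEntities, EMPLEADOS, CLIENTES, GetEmployeeByNumber and
        // GetClientByNumber are assumptions carried over from the article.
        using (MyEntities context = new MyEntities())
        {
            EMPLEADOS emp = GetEmployeeByNumber(2);   // Unchanged
            emp.Name = "a";                           // Modified
            context.SaveChanges();                    // back to Unchanged

            CLIENTES selectedClient = GetClientByNumber(15);
            context.CLIENTES.DeleteObject(selectedClient); // Deleted
            context.SaveChanges();                         // Detached
        }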

    Read the article

  • invitation: Oracle Endeca Information Discovery Bootcamp

    - by mseika
    The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in English.

    What will be covered?
    The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to help them gain valuable skills for the implementation of projects. The course follows a combination of lectures and hands-on lab sessions, allowing participants to apply the knowledge they have gained by extracting from sample data sources and creating an end-user application that answers several business questions.

    What You Will Learn
    - Architecture: OEID components, use of graphs, overview of clustering
    - OEID Installation: architecture planning, infrastructure requirements, installation process, production hints and tips
    - OEID Administration: data store management, administrative operations, portal configuration, data sources, system monitoring
    - Indexing: Integration Suite, data source analysis, graph (ETL) creation, record design techniques
    - Portlets: Studio portlets, custom portlet development, querying functions
    - Reporting: Studio applications and best practices, visualizations, EQL

    Prerequisites
    You must bring a laptop with you for the hands-on labs. Participants will perform the lab exercises using a virtual machine image provided within a cloud environment, so the laptop must be able to reach a Windows server using Microsoft RDP. Participants will not need to install any software onto their laptops, but must ensure they have the proper RDP client installed for their OS.
    Hardware: dual-core x64 CPU, 1.8 GHz or higher; 2 GB RAM.
    Software: Microsoft Remote Desktop Client; Internet Explorer 7, Firefox, or Google Chrome.

    This boot camp is intended for prospective implementers of Oracle Endeca Information Discovery (OEID), or those in a presales role looking to gain insight into the technical benefits of this new package. Attendees should have experience and familiarity with the basic concepts of business intelligence.

    Where and when?
    Monday, October 15th through Wednesday, October 17th (inclusive), 9:00 - 18:00
    Oracle France, 15, boulevard Charles de Gaulle, 92715 Colombes
    Access | Register Here
    Limited number of seats!

    Read the article

  • Live Webcast: Private Cloud Database Consolidation with Oracle Exadata

    - by kimberly.billings
    Thursday, January 20th, 2011 at 9:00 am PT. In this webcast, you'll learn how Oracle Exadata, Oracle Database 11g, and Oracle Real Application Clusters enable you to consolidate multiple applications on clustered server and storage pools to achieve extreme performance and lower your IT costs. You'll also learn how to maximize the efficiencies of private clouds, including:
    • Multitenancy
    • Rapid provisioning
    • Pay-for-use infrastructure
    Join us for this live webcast and discover how Oracle Exadata delivers key cloud capabilities, providing elastic database services that can be quickly provisioned on demand. Register today! To learn more about how customers are consolidating on private clouds with Exadata, watch this video about how Commonwealth Bank of Australia consolidated multiple database services, including OLTP applications such as PeopleSoft Financials, onto an Exadata platform for improved performance and resilience and faster time-to-market.

    Read the article

  • Why did this command make my Ubuntu more awesome than before?

    - by the_misfit
    After running Ubuntu for months and liking it, I wanted to use headphones for the first time, and while trying to get them to work I ran the command in step 1 of https://help.ubuntu.com/community/SoundTroubleshootingProcedure. The result was an improvement in performance, resolution, and audio; a vast improvement. What does that suggest about what was wrong with my settings before, if running this improved system performance, graphics, and my audio problem? The command (split at the semicolons for readability) is:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo apt-get install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2
        sudo apt-get -y --reinstall install linux-sound-base alsa-base alsa-utils gdm ubuntu-desktop linux-image-`uname -r` libasound2
        killall pulseaudio
        rm -r ~/.pulse*
        sudo usermod -aG `cat /etc/group | grep -e '^pulse:' -e '^audio:' -e '^pulse-access:' -e '^pulse-rt:' -e '^video:' | awk -F: '{print $1}' | tr '\n' ',' | sed 's:,$::g'` `whoami`

    Read the article

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of a single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post delves into OSB internals: the problem associated with use of Service Callout under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series mainly covers the threading model used internally by OSB for implementing Route vs. Service Callout actions. Please refer to the blog post for more details.
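    As a taste of the OQL side of that analysis: in the jhat/VisualVM OQL dialect, counting live instances of a suspect class in a heap dump looks like the single line below. The class name here is purely hypothetical, not an actual OSB internal; the real class names are what the dump analysis in the follow-up posts identifies.

        select count(heap.objects('com.example.osb.ServiceCalloutState'))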

    Read the article

  • Need some help implementing VBO's with Frustum Culling

    - by Isracg
    I'm currently developing my first 3D game for a school project. The game world is completely inspired by Minecraft (a world made entirely of cubes). I'm currently seeking to improve performance by trying to implement vertex buffer objects, but I'm stuck. I already have these methods implemented: frustum culling, drawing only exposed faces, and distance culling, but I have the following doubts: I currently have about 2^24 cubes in my world, divided into 1024 chunks of 16*16*64 cubes, and right now I'm doing immediate-mode rendering, which works well with frustum culling. If I implement one VBO per chunk, do I have to update that VBO each time I move the camera (to update the frustum)? Is there a performance hit with this? Can I dynamically change the size of each VBO, or do I have to make each one the biggest possible size (the chunk completely filled with objects)? Would I have to keep each visited chunk in memory, or could I efficiently remove that VBO and recreate it when needed?
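    On the first doubt, the way VBOs work suggests the answer: a buffer holds geometry, not camera state, so moving the camera requires no VBO update; frustum culling only decides which chunk buffers get drawn each frame. A minimal per-chunk sketch under those assumptions (fixed-function desktop OpenGL, position-only vertices; names and layout are illustrative):

        /* Per-chunk VBO sketch. Rebuild a buffer only when the chunk's
           blocks change, never merely because the camera moved. */
        #include <GL/glew.h>

        typedef struct {
            GLuint vbo;          /* buffer holding this chunk's exposed faces */
            GLsizei vertexCount; /* vertices currently in the buffer */
        } Chunk;

        void chunk_upload(Chunk *c, const float *verts, GLsizei count)
        {
            if (c->vbo == 0)
                glGenBuffers(1, &c->vbo);
            glBindBuffer(GL_ARRAY_BUFFER, c->vbo);
            /* glBufferData reallocates, so the buffer can grow or shrink;
               GL_DYNAMIC_DRAW hints the mesh is rebuilt occasionally. */
            glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(float),
                         verts, GL_DYNAMIC_DRAW);
            c->vertexCount = count;
        }

        void chunk_draw(const Chunk *c)   /* call only for chunks in the frustum */
        {
            glBindBuffer(GL_ARRAY_BUFFER, c->vbo);
            glVertexPointer(3, GL_FLOAT, 0, 0);
            glEnableClientState(GL_VERTEX_ARRAY);
            glDrawArrays(GL_TRIANGLES, 0, c->vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

        void chunk_free(Chunk *c)   /* evicting a far-away chunk is cheap */
        {
            if (c->vbo) { glDeleteBuffers(1, &c->vbo); c->vbo = 0; c->vertexCount = 0; }
        }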

    Read the article

  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes datomic as a system which implicitly deals with timestamps associated with data storage. From my experience, data is often imperfectly stored in systems, and on many occasions needs to be corrected retroactively (i.e., often the question "was fact X true on Tuesday at 12:00pm?" will have an incorrect answer stored in the database). This seems like a spot where the abstractions behind datomic might break down. Do they? If they don't, how does the system handle such corrections? Rich Hickey, in several of his talks, justifies the creation of datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some related context (a date-time) with them. By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice: his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of time-related data manipulation that database programmers are faced with. Does datomic gracefully handle such updates? If so, how?

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Crystal Reports: 3 New Uses For Sub Reports

    I hate sub reports and always consider them the last resort in any reporting solution. The negative effect on performance and maintainability is just not worth the easy ride they give the report writer. Nine times out of ten, reporting requirements can be met using a little forethought and planning (and a solid understanding of formulas). With that said, there are a few novel ways of using sub reports which will not affect performance and actually prove a boon to the developer.

    Read the article

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Webmaster Tools and Google Analytics set up for years, but I can't find access to data about site backlinks in either toolset. Webmaster Tools, under 'Traffic' - 'Links to your site', gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "link:" as an operator in Google search, but for each of my sites this returns either zero or very few results, in cases where I know I have many backlinks. Most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured they might let me see it. But I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data than Google on my backlinks, and more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

    Read the article

  • SQL Saturday #294 - Philadelphia

    SQL Saturday is coming to Philadelphia on June 7, 2014. This event is a free day of training and networking for SQL Server professionals, organized by the Philadelphia SQL Server User Group. The event also features two paid-for precons, one presented by Allan Hirt and the other presented jointly by Joseph D'Antoni and Stacia Misner. Register while space is available.

    Read the article

  • Automation Question using VMWare Workstation

    - by James K
    I'm running an experiment that requires me to create 100 instances of Windows XP w/SP3, saving each VM instance off to a hard drive. I have to note the time the VM load starts (starting my timer when I see "Setup is preparing...") until the load ends, when I see the final desktop after the VM loads its drivers. I also have to note the host start and stop times. Is there any way this process can be automated? Each load takes me about 16 minutes and gets really tiresome after a while. BTW, exact timing is not necessary; eyeballing as described above is sufficient for my testing needs.
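    VMware Workstation ships a command-line tool, vmrun, that can drive the start/stop and logging side of this. A rough sketch of a harness follows; the paths are placeholders, and since VMware Tools isn't present during an XP install, the "load ends" moment is still eyeballed (press Enter at the final desktop), which matches the tolerance described above:

        #!/bin/bash
        # Hypothetical timing harness around vmrun; adapt paths to taste.
        # Assumes /vms/template holds a VM configured to boot the XP SP3
        # installer, so each copy runs a fresh install.
        for i in $(seq 1 100); do
            cp -r /vms/template "/vms/run-$i"
            VMX="/vms/run-$i/xp.vmx"
            echo "instance $i host start: $(date)" >> load-times.log
            vmrun -T ws start "$VMX" nogui
            START=$(date +%s)
            # Eyeball the end point, as in the question.
            read -p "Press Enter when instance $i reaches the desktop " _
            echo "instance $i load: $(( ($(date +%s) - START) / 60 )) min" >> load-times.log
            vmrun -T ws stop "$VMX" soft
            echo "instance $i host stop: $(date)" >> load-times.log
        done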

    Read the article

  • New Exadata and Exalogic public references

    - by Javier Puerta
    The following are new public references for Exadata and Exalogic:

    - Allegis Accelerates HR Processing for 130,000 Contractors. Oracle customer Allegis describes how Oracle Exadata and Oracle Exalogic helped consolidate and optimize critical processes running in Oracle's PeopleSoft.
    - Hyundai Motor Company Cuts Document Repository Management and Access Times Approximately 85%, Saves More Than US$1 Million in Yearly Printing and Paper Costs. The company implemented Oracle Exalogic Elastic Cloud, Oracle Exadata Database Machine, Oracle WebLogic, and Oracle WebCenter Content 11g to ensure high performance and stability for its new document-centralization system.
    - University of Minnesota Reduces Data Center Footprint While Enhancing Performance and Manageability with Oracle Exadata Database Machine. The leading research institution consolidated more than 200 databases to approximately 20 while maximizing availability for thousands of users.
    - SThree Prepares to Triple in Size with a Cloud-Based Architecture and a Consolidated, Stable, and Scalable Global Platform. By consolidating 68 databases into a single Oracle Exadata Database Machine, SThree achieved the stability and scalability it needed to support its growth targets. Further enhancements to the organization's core systems include a planned upgrade for Siebel Contact Center and improved integration with Oracle Fusion Middleware.

    Read the article

  • Live Updates in PrimeFaces Line Chart

    - by Geertjan
    In the Facelets file:

        <p:layoutUnit position="center">
            <h:form>
                <p:poll interval="3" update=":chartPanel" autoStart="true" />
            </h:form>
            <p:panelGrid columns="1" id="chartPanel">
                <p:lineChart xaxisLabel="Time" yaxisLabel="Position"
                             value="#{chartController.linearModel}"
                             legendPosition="nw" animate="true"
                             style="height:400px;width: 1000px;"/>
            </p:panelGrid>
        </p:layoutUnit>

    The controller:

        import java.io.Serializable;
        import javax.inject.Named;
        import org.primefaces.model.chart.CartesianChartModel;
        import org.primefaces.model.chart.ChartSeries;

        @Named
        public class ChartController implements Serializable {

            private CartesianChartModel model;
            private ChartSeries data;

            public ChartController() {
                createLinearModel();
            }

            private void createLinearModel() {
                model = new CartesianChartModel();
                model.addSeries(getStockChartData("Stock Chart"));
            }

            private ChartSeries getStockChartData(String label) {
                data = new ChartSeries();
                data.setLabel(label);
                for (int i = 1; i <= 20; i++) {
                    data.getData().put(i, (int) (Math.random() * 1000));
                }
                return data;
            }

            public CartesianChartModel getLinearModel() {
                return model;
            }
        }

    Based on this sample.
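    One detail worth noting: as written, the series is generated once in the constructor, so whether each poll tick shows new data depends on the bean's scope. A sketch of one variant that forces fresh data on every poll (the session scope and refresh listener are assumptions, though the p:poll listener attribute itself is standard):

        import java.io.Serializable;
        import javax.enterprise.context.SessionScoped;
        import javax.inject.Named;
        import org.primefaces.model.chart.CartesianChartModel;
        import org.primefaces.model.chart.ChartSeries;

        // Hypothetical variant of the bean above: session-scoped, with a
        // refresh() listener so each poll regenerates the data before the
        // chart re-renders. Wire it with:
        //   <p:poll interval="3" listener="#{chartController.refresh}"
        //           update=":chartPanel" autoStart="true" />
        @Named
        @SessionScoped
        public class ChartController implements Serializable {

            private CartesianChartModel model;

            public ChartController() {
                refresh();
            }

            public void refresh() {
                model = new CartesianChartModel();
                ChartSeries data = new ChartSeries();
                data.setLabel("Stock Chart");
                for (int i = 1; i <= 20; i++) {
                    data.getData().put(i, (int) (Math.random() * 1000));
                }
                model.addSeries(data);
            }

            public CartesianChartModel getLinearModel() {
                return model;
            }
        }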

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS. Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection, this data can be sent back to the server. Multiple photographs can be taken of each item, but they are held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters.

    Firstly, the web service architecture: we currently have two operations, GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee a delivery order from the app; for example, we need to make sure that containing rooms are added before their items. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc.). Does the type of web API (operation versus resource focus) determine chunky vs. chatty? Since this is mobile (3G), are we better off handling lots of smaller messages, or a few large ones?

    Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues, and we need to guarantee delivery order in each queue (and, technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates, and we don't want any of it in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted, and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent yet, but the original entity tree has already been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice Xcode libraries out there to do such things. I welcome any advice you might have.
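    On that last point, the usual shape of "update rather than replace" is a merge keyed on the client-side id, so locally queued photos survive the echo. A toy sketch of the idea (all types and field names here are invented for illustration, not an existing library):

        import Foundation

        // Hypothetical merge-by-id instead of delete-and-replace, so
        // locally queued photo references on existing items are preserved.
        struct CostItem {
            var localId: UUID
            var serverId: Int?          // filled in once the server echoes it
            var amount: Decimal
            var pendingPhotos: [URL]    // not yet uploaded; must survive the sync
        }

        struct EchoedItem {             // what the server sends back
            let localId: UUID
            let serverId: Int
            let amount: Decimal
        }

        func merge(local: inout [CostItem], echoed: [EchoedItem]) {
            let byLocalId = Dictionary(uniqueKeysWithValues: echoed.map { ($0.localId, $0) })
            for i in local.indices {
                guard let match = byLocalId[local[i].localId] else { continue }
                local[i].serverId = match.serverId   // adopt the server's id
                local[i].amount = match.amount       // server-side business logic wins
                // pendingPhotos is left untouched, unlike a replace-the-tree sync
            }
        }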

    Read the article

  • Delete blank row in dropdown list or select default value in InfoPath dropdown

    - by KunaalKapoor
    Regular dropdown (pulling from a data source):
    1. Double-click on the dropdown field in the data source.
    2. Select the Fx button for Default Value.
    3. Select Insert Field or Group.
    4. Select the secondary XML data source.
    5. Select "value" and click OK.

    For a cascading dropdown, you have to add a rule and follow these steps:
    1. Rules -> 'Add' -> 'Add Action'.
    2. Select the 'Set a field value' option in the first dropdown in Action.
    3. Select your field with the help of the 'Select a Field or Group' option in 'Field'.
    4. Select your external data source list value in 'Value'.
    This rule can be applied OnLoad, or whenever you get the external data source values.

    Read the article

  • Event sourcing and persistence

    - by jgauffin
    I'm reading up on event sourcing and have a question regarding persistence. I can still have a DB with all entities, right? Or should the events be replayed every time the application is started, to get the latest version of each entity into memory? That seems like a waste on larger systems (as in, with a large amount of data). Is the point of event sourcing that I can replay the events to populate a data store if required (or to analyze the data)?
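    That last sentence is essentially the pattern: the event log stays the source of truth, while a persisted read model (the "DB with all entities") is kept up to date as events arrive, with full replay reserved for rebuilding it or for analysis. A toy sketch of replaying a log into a projection (all types here are invented for illustration):

        using System;
        using System.Collections.Generic;

        // Events are the source of truth; the dictionary below is a
        // disposable read model that replay can rebuild at any time.
        record AccountCredited(Guid AccountId, decimal Amount);

        class BalanceProjection
        {
            private readonly Dictionary<Guid, decimal> balances = new();

            // Called both for live events and during a full replay.
            public void Apply(AccountCredited e) =>
                balances[e.AccountId] = balances.GetValueOrDefault(e.AccountId) + e.Amount;

            public decimal BalanceOf(Guid id) => balances.GetValueOrDefault(id);
        }

        class Program
        {
            static void Main()
            {
                var id = Guid.NewGuid();
                var events = new List<AccountCredited>
                {
                    new(id, 100m), new(id, 50m)   // the append-only log
                };

                var projection = new BalanceProjection();
                foreach (var e in events) projection.Apply(e);   // replay

                Console.WriteLine(projection.BalanceOf(id));     // 150
            }
        }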

    Read the article

  • How much configurability to give to users regarding concurrency?

    - by rwong
    This question is a narrowing-down of these related questions: "How much effort should we spend programming for multiple cores?" and "Concurrency: how do you approach the design and debug the implementation?" Given that each user's computer may have different performance characteristics with respect to calculation, memory, disk I/O bandwidth, and network I/O bandwidth, and that it is difficult to implement an automated self-tuning system in your software, how much configurability should we give to end users so that they can find ways (by trial and error?) to improve our software's efficiency? If we give users the ability to change these settings, how do we give them visual feedback so they can measure the performance changes?

    Read the article

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got 2 nginx EC2 instances pointing to 2 Unicorn EC2 instances in a round-robin load-balanced configuration. The two nginx instances are behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, which is in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?
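    For comparison, a typical redis-store session configuration looks something like the sketch below. The app name and host are placeholders (ELASTIC-IP standing in for the master's Elastic IP), and the key point for this setup is that both Unicorn hosts must point at the same Redis master, or sessions written by one will be invisible to the other:

        # config/initializers/session_store.rb -- illustrative, Rails 3-era syntax
        MyApp::Application.config.session_store :redis_store,
          servers: "redis://ELASTIC-IP:6379/0/session",
          expire_after: 1.week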

    Read the article
