Search Results

Search found 41025 results on 1641 pages for 'in memory database'.


  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions - something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB of physical RAM and y MB of pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB of physical RAM and no pagefile?

    Windows is "used to" having a paging file and it might not function as reliably (from "Understanding the Impact of RAM on Overall System Performance"). That's rather vague, and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • How to represent a tree structure in NoSQL

    - by Vlad Nicula
    I'm new to NoSQL and have been playing around with a personal project on the MEAN stack (MongoDB, ExpressJS, AngularJS, NodeJS). I'm building a document editor of sorts that manages nodes of data. Each document is actually a tree. I have a CRUD API for documents, to create new trees, and a CRUD API for the nodes in a given document. Right now each document is stored as a single record that holds everything, including its nodes. The parent-child relationship is done by ids: the nodes are kept in a map keyed by id, and each node holds references to the ids of its children. I chose this "flat" approach because it makes it easy to look up a node by id within a document. Coming from relational databases, where I would have one table relating nodes to documents and another relating nodes to their children, I find it a bit weird that I have to save the entire "nodes" map each time I update a single node. Is there a better way to represent this kind of data in NoSQL?
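
    For context, one common alternative to storing the whole node map inside each document is to make every node its own record holding a reference to its parent, so that updating one node never rewrites the rest of the tree. A minimal sketch of that pattern in TypeScript with the official MongoDB Node driver (the database, collection, and field names here are illustrative assumptions, not from the original project):

        import { MongoClient, ObjectId } from "mongodb";

        // One record per node: "docId" groups nodes into a tree,
        // "parentId" encodes the hierarchy (null for the root).
        interface TreeNode {
          _id?: ObjectId;
          docId: ObjectId;
          parentId: ObjectId | null;
          payload: string;
        }

        async function updateNode(uri: string, nodeId: ObjectId, payload: string) {
          const client = new MongoClient(uri);
          await client.connect();
          try {
            const nodes = client.db("editor").collection<TreeNode>("nodes");
            // Touches only this one node; the rest of the tree is untouched.
            await nodes.updateOne({ _id: nodeId }, { $set: { payload } });
            // Children are found by a query instead of an embedded map.
            return await nodes.find({ parentId: nodeId }).toArray();
          } finally {
            await client.close();
          }
        }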

    Read the article

  • I want to install MySQL and get to building a personal database

    - by Ari Hall
    So how do I go from installing MySQL from the Software Center to entering data into fields and importing a comma-delimited file? I've only had brief experience with MS Access and OOo Base a long time ago, so details are appreciated; I just want to get up and running. I have Ubuntu 10.10, 64-bit, if that affects much. If you can link me to a howto that does exactly what I'm looking for, that would work. Again, thanks!
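
    For reference, once the server is installed, a comma-delimited file can be loaded either interactively from the mysql client (e.g. with a LOAD DATA statement) or from a small script. Below is a minimal sketch using Node's mysql2 driver; the database name, table, columns, and credentials are made-up placeholders, and the CSV parsing is deliberately naive (no quoted fields):

        import { createConnection } from "mysql2/promise";
        import { readFile } from "node:fs/promises";

        // Load a simple comma-delimited file into a hypothetical "contacts" table.
        async function importCsv(path: string) {
          const conn = await createConnection({
            host: "localhost",
            user: "myuser",          // placeholder credentials
            password: "mypassword",
            database: "mydb",
          });
          await conn.execute(
            "CREATE TABLE IF NOT EXISTS contacts (name VARCHAR(100), email VARCHAR(100))"
          );
          const rows = (await readFile(path, "utf8"))
            .split("\n")
            .filter((line) => line.trim().length > 0)
            .map((line) => line.split(","));
          for (const [name, email] of rows) {
            // Parameterized insert, one row per CSV line.
            await conn.execute("INSERT INTO contacts (name, email) VALUES (?, ?)", [name, email]);
          }
          await conn.end();
        }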

    Read the article

  • Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms.

    - by jenny.gelhausen
    SAP announces certification of Oracle Database 11g Release 2 on all available UNIX and Linux platforms. This certification comes along with the immediate availability of the following important options and features:

    * Advanced Compression Option (table, RMAN backup, expdp, DG Network)
    * Real Application Testing
    * Oracle Database 11g Release 2 Database Vault
    * Oracle Database 11g Release 2 RAC
    * Advanced Encryption for tablespaces, RMAN backups, expdp, DG Network
    * Direct NFS
    * Deferred Segments
    * Online Patching

    All of the above functionality has been fully integrated into the SAP products, so it can be utilized and managed from within the SAP solution stack. All required migration steps can be done fully online.

    Learn why Oracle is the #1 Database for Deploying SAP Applications
    SAP Certification announcement

    Read the article

  • Can we decrease page swapping?

    - by Benjamin
    My system has 5GB of RAM, and my paging file size is 2GB. Even though I have plenty of RAM, page swapping still occurs, and I don't want it to. I know how to adjust the paging file size. If I shrink the paging file (to, say, 200MB), will Windows stop swapping? Are there any side effects?

    Read the article

  • Can I move the Flash temp folder/buffer (Win64)

    - by xciter
    I need to move the folder in which the Flash plugin saves its temporary files (so NOT the browser cache folder). Normally, the plugin saves its buffered content in the operating system's default temporary folder. However, I do not want to move that folder itself; I only need to change the folder that Flash uses. My operating system is Windows 7 64-bit, and I am using the latest version of the Flash plugin.

    Read the article

  • Node.js server crashes, database operations left halfway?

    - by Ranadeep
    I have a Node.js app with a MongoDB backend going to production in a week, and I have a few doubts about how to handle app crashes and restarts. Say I have a simple route /followUser in which I have two database operations:

    /followUser
    -----> Update User1 Document.followers = User2
    -----> Update User2 Document.followers = User1
    -----> Some other MongoDB (via Mongoose) operation

    What happens if there is a server crash (due to power failure, or maybe the remote MongoDB server is down) in a scenario like this:

    -----> Update User1 Document.followers = User2
    SERVER CRASHED, FOREVER RESTARTS NODE

    What happens to the operations below? The system is now in an inconsistent state, and I may get an error every time I ask for User2's followers.

    -----> Update User2 Document.followers = User1
    -----> Some other MongoDB (via Mongoose) operation

    Also, please recommend good logging and restart/monitor modules for apps running on Linux.
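
    One mitigation worth noting here: if each step is written as an idempotent update, a request that died halfway can simply be replayed after restart without corrupting the data. A minimal sketch of that idea in TypeScript with the MongoDB Node driver ($addToSet only adds a follower if it is not already present; the collection and field names are illustrative, not from the original post):

        import { Collection, ObjectId } from "mongodb";

        // Idempotent "follow": safe to replay after a crash/restart,
        // because $addToSet never inserts the same follower twice.
        async function follow(users: Collection, a: ObjectId, b: ObjectId) {
          await users.updateOne({ _id: a }, { $addToSet: { followers: b } });
          await users.updateOne({ _id: b }, { $addToSet: { followers: a } });
        }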

    Read the article

  • Weird screeching sounds coming from computer

    - by EGHDK
    My computer makes a high-pitched whining noise that initially was coming from my SSD, but I recently installed more RAM into my laptop and now it makes a new sound. I don't want my RAM or SSD to die on me. Are there any tests for both of these? Again, these are really high-pitched whining sounds that you wouldn't hear normally, but when I'm home alone and it's silent, the noise sounds as loud as can be.

    Read the article

  • Adventures in the Land of CloudDB/NoSQL/NoAcid

    - by KKline
    Cloud, Bunny, or CloudBunny? Last year, some of my friends from Quest Software attended Hadoop World in New York. In 2009, I never would've guessed that Quest would be there with products and community initiatives, as a major sponsor, and with presenters. There were just under 1,000 attendees, and they weren't the typical devheads and geekasaurs you'd normally see at very techie events like Code Camps, SQL Saturdays, Cloud Camps, or even other NoSQL events such as the Cassandra Summit. We're talkin' enterprise...(read more)

    Read the article

  • Architecture for social graph data with an associated time frame?

    - by Jay Stevens
    I am adding some "social" type features to an existing application. There are a limited number of node and edge types. Overall the data itself is relatively small (50,000 - 70,000 of each type of node), and there will be a number of edges (relationships) between them, almost all directional. This, I know, is relatively easy to represent with an RDF store (such as BrightstarDB) or something like Microsoft's Trinity (or, really, many of the NoSQL options). The thing that, I think, makes this a unique use case is that each relationship has a timeframe associated with it (start and end dates). Right now, I'm thinking of just storing this in a relational structure and dealing with the headaches of "traversing the graph", but I'm looking for suggestions on a better approach, both in terms of data structure and server:

    Column
    ================
    From_Node_ID
    Relationship
    To_Node_ID
    StartDate
    EndDate

    Any suggestions or thoughts are welcomed.
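
    As a point of comparison, the time-bounded edge itself is easy to model; the work is in restricting every traversal to edges that were active at a given instant. A small, self-contained sketch of that filter in TypeScript (the type and field names mirror the columns above but are otherwise made up):

        // A directed edge that is only valid between startDate and endDate.
        interface Edge {
          from: string;
          relationship: string;
          to: string;
          startDate: Date;
          endDate: Date | null; // null = still active
        }

        // Outgoing neighbours of `node` as of time `at` - the building
        // block for any traversal over a time-scoped graph.
        function neighboursAt(edges: Edge[], node: string, at: Date): string[] {
          return edges
            .filter(
              (e) =>
                e.from === node &&
                e.startDate <= at &&
                (e.endDate === null || at <= e.endDate)
            )
            .map((e) => e.to);
        }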

    Read the article

  • Forwarding port 3306 on Mac OS X in order to connect to a remote MySQL Database

    - by Jonathan Mayhak
    I'm on Mac OS X 10.6.2, trying to connect to an Ubuntu Server 8.04.1 machine at Linode:

        ssh -L 127.0.0.1:3306:[[remote ip]]:3306 user@server -N

    I want to set up SSH tunneling so that I can access a remote MySQL server. First of all, I'm told bind: Address already in use. This only happens after I've tried the command before - how do I manually close a port forwarding session? Second, when I change the command to listen on local port 3310 instead:

        ssh -L 127.0.0.1:3310:[[remote ip]]:3306 user@server -N

    I'm told channel 1: open failed: connect failed: Connection refused when I try to connect to the MySQL server via MySQL Workbench or Sequel Pro. To connect through MySQL Workbench I use the following settings:

        host: 127.0.0.1
        port: 3310 (if 3306 is in use)
        username: mysql username
        password: mysql password
        database: I don't put anything in

    Read the article

  • Stored procedure naming conventions?

    - by Chris
    One of our senior developers has stated that we should be using a naming convention for stored procedures with an "objectVerb" style of naming, such as "MemberGetById", instead of a "verbObject" style ("GetMemberByID"). The reasoning for this standard is that all related stored procedures would be grouped together by object rather than by action. While I see the logic of naming things this way, this is the first time I have seen stored procedures named like this. My opinion is that such a name cannot be read naturally, and it takes some time to work out what the words are saying and what the procedure might do. What are your takes on this? Which is the more common way of naming a stored procedure, and what types of stored procedure naming conventions have you used or followed?

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup tool
    Note 1082674.1: A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file. An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces.

    Looking for the best way to diagnose? Whenever an ORA-7445 error is raised, a core file is generated. There may be a trace file generated with the error as well. Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data: diagnostic files are written into a root directory for all diagnostic data called the ADR home, and core files at 11g go to the ADR HOME/cdump directory. For more information on the Oracle 11g Diagnosability feature, see:

    Note 453125.1 11g Diagnosability Frequently Asked Questions
    Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1. Check the Alert Log

    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and you should review the first error and its associated core file or trace file, then work down the list of errors.

    Note 1020463.6 DIAGNOSING ORA-3113 ERRORS
    Note 1812.1 TECH: Getting a Stack Trace from a CORE file
    Note 414966.1 RDA Documentation Index

    If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing from the file, and discovering the root issue may be very difficult. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors, see:

    Note 390293.1 Introduction to 600/7445 Internal Error Analysis
    Note 211909.1 Customer Introduction to ORA-7445 Errors

    2. Search the 600/7445 Lookup Tool

    Visit My Oracle Support to access the ORA-600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem. It can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match against known bugs.

    3. "Fine tune" searches in the Knowledge Base

    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more you can learn about the environment and the code leading up to the errors, the easier it will be to narrow the hit list to match your problem.

    Note 153788.1 ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file. Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically. The core files can be quite large, but may be useful during analysis within Oracle Support.

    4. If assistance is required from Oracle

    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum:

    - the alert log
    - associated trace file(s) or incident package at 11g
    - patch level information
    - core file(s)
    - information about changes in configuration and/or application prior to the issues
    - if the error is reproducible, a self-contained reproducible testcase: Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors
    - an RDA report or Oracle Configuration Manager information:
      Oracle Configuration Manager Quick Start Guide
      Note 548815.1 My Oracle Support Configuration Management FAQ
      Note 414966.1 RDA Documentation Index

    For reference to the content in this blog, refer to Note 1092832.1 Master Note for Diagnosing ORA-600.

    Read the article

  • Secure all around. Holistically well advised.

    - by A&C Redaktion
    Among consulting topics, IT security has evolved into a holistic discipline. Despite the great demand, only a select few companies have so far been able to offer the full spectrum of IT security consulting. As an Oracle Gold Partner, TWINSEC GmbH focuses on advising large and mid-sized companies on implementing security requirements. Norbert Drecker, managing director of the Cologne-based consultancy, can explain exactly why security now has top priority in the executive suite. Using Oracle Identity Analytics as an example, Norbert Drecker explains how the consolidation of entitlements can be implemented and what advantage recertification brings in practical use. If you want to learn why Oracle Identity Analytics is the right tool for more security in the enterprise, and why TWINSEC relies on close collaboration with Oracle, then watch the video.

    Read the article

  • Persisting NLP parsed data

    - by tjb1982
    I've recently started experimenting with NLP using Stanford's CoreNLP, and I'm wondering: what are some of the standard ways to store NLP-parsed data for something like a text mining application? One way I thought might be interesting is to store the children as an adjacency list and make good use of recursive queries (Postgres supports this, and I've found it works really well). Something like this:

    Component ( id, POS, parent_id )
    Word ( id, raw, lemma, POS, NER )
    CW_Map ( component_id, word_id, position int )

    But I assume there are probably many standard ways to do this, depending on what kind of analysis is being done, that have been adopted by people working in the field over the years. So what are the standard persistence strategies for NLP-parsed data, and how are they used?
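
    To make the adjacency-list idea concrete, the recursive query mentioned above would look roughly like this. A sketch in TypeScript using the node-postgres (pg) driver against the Component table from the question (the lowercase column names and connection details are assumptions):

        import { Pool } from "pg";

        // Fetch a parse-tree component and all of its descendants in one
        // round trip, via a recursive CTE over the parent_id adjacency list.
        async function componentSubtree(pool: Pool, rootId: number) {
          const { rows } = await pool.query(
            `WITH RECURSIVE subtree AS (
               SELECT id, pos, parent_id FROM component WHERE id = $1
               UNION ALL
               SELECT c.id, c.pos, c.parent_id
               FROM component c
               JOIN subtree s ON c.parent_id = s.id
             )
             SELECT * FROM subtree`,
            [rootId]
          );
          return rows;
        }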

    Read the article

  • Remote Desktop with resource sharing

    - by Malfist
    I recently removed Ubuntu from my laptop and installed Windows 7, because Microsoft's DreamSpark program gave me Visual Studio Pro for free and I want to do some C# programming with it. The problem is that my laptop's screen is small, its resources are extremely limited, and it doesn't have a full-sized keyboard. However, I do have a desktop that is my primary system, and it's got a quad core and tons of RAM in it, and dual monitors. My question is this: is there a way I can use a program from my laptop on my desktop, and share my desktop's CPU and RAM with the laptop? Everything is connected through a 100 Mbit/s switch. One caveat: my desktop is running Ubuntu.

    Read the article

  • Is bigger-capacity RAM faster than smaller-capacity RAM at the same clock and CL? [migrated]

    - by didibus
    I know that larger-capacity hard drives with the same RPM are faster than smaller-capacity ones. I was wondering if the same is true for RAM. Given two RAM kits clocked at 1600MHz with identical CL timings (9-9-9-24): is a 2x8GB kit going to perform better than a 2x4GB kit? Note that I am not asking whether having more RAM will improve the performance of my PC; I'm asking whether the larger-capacity RAM itself performs better. Thank you.

    Read the article

  • Update fails with unrecoverable dpkg fatal error

    - by Jonthue Michel
    Hello, I keep receiving this error every time I try to do some updates. I tried sudo dpkg --configure, sudo apt-get update, and sudo apt-get install -f, but they all failed on me:

        installArchives() failed: (Reading database ...
        (Reading database ... 5%
        (Reading database ... 10%
        (Reading database ... 15%
        (Reading database ... 20%
        (Reading database ... 25%
        (Reading database ... 30%
        (Reading database ... 35%
        (Reading database ... 40%
        (Reading database ... 45%
        (Reading database ... 50%
        (Reading database ... 55%
        dpkg: unrecoverable fatal error, aborting:
        failed to read on buffer copy for files list for package `libc6-i386': Is a directory

    Read the article

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. Each string receives about two appends per minute and will not be retrieved very often. The worst-case scenario is a 5-10MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?
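
    For scale, the filesystem baseline the question is weighing is only a few lines, and it matches the workload well: appending to the end of a file is cheap, and the full 5-10MB string is only materialized on the rare read. A sketch in TypeScript (one file per object id; the storage directory is a made-up path):

        import { appendFile, readFile } from "node:fs/promises";
        import { join } from "node:path";

        const DATA_DIR = "/var/data/strings"; // hypothetical storage directory

        // Write path: O(chunk) work per append, never rewrites existing contents.
        async function appendChunk(objectId: string, chunk: string) {
          await appendFile(join(DATA_DIR, `${objectId}.log`), chunk);
        }

        // Rare read path: materialize the full string only when asked for.
        async function readAll(objectId: string): Promise<string> {
          return readFile(join(DATA_DIR, `${objectId}.log`), "utf8");
        }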

    Read the article

  • Oracle Exadata X3 announcement at Oracle Openworld

    - by Javier Puerta
    Oracle Announces Oracle Exadata X3 Database In-Memory Machine (Oracle Press Release). Fourth-generation Exadata X3 systems are ideal for high-end OLTP, large data warehouses, and database clouds; the Eighth-Rack configuration offers a new low-cost entry point.

    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. The Oracle Exadata X3-2 and X3-8 Database In-Memory Machines can store up to hundreds of terabytes of compressed user data in flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development, and disaster recovery systems, and is a fully redundant system that can be used with mission-critical applications. Detailed info at Oracle Exadata Database Machine.

    Read the article

  • Rails and Mongoid: best way to implement a sharing system

    - by Matteo Pagliazzi
    I have to model User and Board in Rails using Mongoid as the ODM. Each board is referenced to a user through a foreign key user_id, and now I want to add the ability to share a board with other users. Following CRUD, I'd create a new model called something like Share, and its related controller, with the ability to create/edit/delete a Share, but I have some doubts. First, where should I save information about shares? I think I might create a field in the Board collection called shared_with containing an array of user ids. In MySQL I'd have created a new table with the ids of who shares, the resource shared, and the user the resource is shared with, but I don't think that's necessary with MongoDB. Every user a board is shared with should be able to edit the board (but not delete it), so the Board should have two relations: one with the owner and another with the users the board is shared with, right? For permissions (the owner should be able to delete a board, but the users it is shared with shouldn't), what should I use? I'm using Devise for authentication, but I think something like CanCan would fit better - but how should I implement it? What do you think about this approach? Do you see any problems, or have better solutions?
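
    For what it's worth, the document shape being debated is straightforward to sketch. The example below uses TypeScript with the MongoDB Node driver rather than Mongoid, purely to illustrate the shared_with-array idea and the "owned or shared with me" query; all names are illustrative assumptions:

        import { Collection, ObjectId } from "mongodb";

        // The board embeds the ids of the users it is shared with.
        interface Board {
          _id?: ObjectId;
          userId: ObjectId;       // the owner (may delete)
          sharedWith: ObjectId[]; // may edit, but not delete
          title: string;
        }

        // Share a board: idempotent thanks to $addToSet.
        async function share(boards: Collection<Board>, boardId: ObjectId, userId: ObjectId) {
          await boards.updateOne({ _id: boardId }, { $addToSet: { sharedWith: userId } });
        }

        // Everything a user can see: boards they own, or boards shared with them.
        function visibleBoards(boards: Collection<Board>, userId: ObjectId) {
          return boards.find({ $or: [{ userId }, { sharedWith: userId }] }).toArray();
        }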

    Read the article

  • Home server - HP Proliant Microserver - Software and setup - OS on USB stick?

    - by Lloyd Watkin
    I've just purchased an HP ProLiant MicroServer for home use. I want to set it up with a web server, Samba shares, the usual stuff. My question is really about system setup. It has an internal USB socket, so I attempted to install a copy of Fedora 14 onto a USB stick. I turned off X/GNOME, but it still ran like a pig. I've now put the OS on one of the internal disks (250GB, 7200rpm), but I was wondering if there is a way to utilise the internal USB to get better power saving, allowing the hard drives to be spun down when not in use. How would you set this server up? I'd rather not go to the extra cost of an SSD right now, but if that's the best way, then so be it.

    Read the article

  • Is there a rule of thumb for RAM upgrades?

    - by Retrosaur
    I'm having a hard time figuring out whether or not a certain laptop or desktop's RAM can be upgraded. Is there a rule of thumb for determining the maximum RAM one could add to a system, without looking it up on external websites? A little background: I work in computer sales at a computer electronics store, so it is virtually impossible for me to install any software that would detect a computer's specs, and I get a lot of customers who wonder about laptop/desktop RAM upgrades. Is there a certain rule that adding more RAM follows? Does it make a difference whether it's a 32-bit or 64-bit machine? The OS?

    Read the article

  • 4 GB DDR2 vs 2 GB DDR3

    - by metal gear solid
    I'm going to purchase a new PC. Due to my budget limit, I can purchase either 2 x 2GB = 4GB of DDR2 or a single 2GB stick of DDR3. Will 2GB of DDR3 give almost the same performance as 4GB of DDR2? In the future I will upgrade the RAM to 8GB. Which option would be better for me for now, and why?

    Read the article

  • Understanding flight search engines

    - by Jens Jensen
    Today I discovered a search engine website that offers a service where you enter your departure location and it then searches for the destinations you can fly to for the cheapest price. This is very nice to use if you want to fly somewhere but don't know which "good deals" are available. This is the site: http://www.kayak.com/explore/ Can someone explain to me which programs are (mostly) used for this, and summarize how this sort of search engine is built? I find it very interesting, but unfortunately not all possible flight tickets are shown, so I think the project could be improved.

    Read the article
