Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • iPhone SDK mySQL

    - by Kevin
    Hello everybody, I'm starting on a new iPhone project, and the application relies mostly on MySQL. I have a MySQL database running on my computer, and I need the application to send queries to the server to retrieve information. One example is creating and logging into an account. I have successfully done this in my Windows VB.NET application, but I know that Objective-C will be harder. This is basically fetching text from the SQL database and displaying it in the iPhone application. If someone could help me with just that, I'll easily get started. So I hope that you guys can help me out here :). Thanks, Kevin

    Read the article

  • suggestions for Membership in ASP.NET MVC application

    - by mare
    With this question I am mostly looking for answers from people who have implemented the out-of-the-box ASP.NET membership schema in their own database. I've set up the tables inside my database, and as far as I can see they contain mostly what I need, but not everything. I will have the notion of a Firm (Company) to which Users belong, so I will have to associate aspnet_Users with my Firms table (each user will be a member of exactly one firm). If possible, provide some guidelines on how you did it and what I might run into if I have to modify the table design at some point in the future. Preferably I will be using the default Membership provider. I am having trouble deciding whether to start from scratch or use what ASP.NET already offers.
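
    One approach that keeps the default provider intact is to leave the aspnet_ tables untouched and hold the firm association in your own tables, joined on UserId. A minimal T-SQL sketch follows; the Firms and FirmMembers names and columns are assumptions for illustration, not part of the standard membership schema:

        -- Hypothetical companion tables; aspnet_Users itself is never modified,
        -- so future membership/provider upgrades remain safe.
        CREATE TABLE Firms (
            FirmId   INT IDENTITY(1,1) PRIMARY KEY,
            FirmName NVARCHAR(256) NOT NULL
        );

        CREATE TABLE FirmMembers (
            UserId UNIQUEIDENTIFIER NOT NULL PRIMARY KEY
                REFERENCES aspnet_Users (UserId),
            FirmId INT NOT NULL
                REFERENCES Firms (FirmId)
        );

    Making UserId the primary key of the link table enforces the exactly-one-firm-per-user rule at the database level.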

    Read the article

  • SQL query optimization

    - by nvtthang
    I have a problem with an SQL query that takes a long time to get all records from the database. Can anybody help me? Below is a sample of the schema:

        order(order_id, order_nm)
        customer(customer_id, customer_nm)
        orderDetail(orderDetail_id, order_id, orderDate, customer_id, Comment)

    I want to get the latest customer and order detail information. Here is my solution: I've created a function, GetLatestOrderByCustomer(CusID), to get the latest order information for a customer.

        CREATE FUNCTION [dbo].[GetLatestOrderByCustomer] ( @cus_id int )
        RETURNS varchar(255)
        AS
        BEGIN
            DECLARE @ResultVar varchar(255)
            SELECT @ResultVar = tmp.comment
            FROM (
                SELECT TOP 1 orderDate, comment
                FROM orderDetail
                WHERE orderDetail.customer_id = @cus_id
                ORDER BY orderDate DESC  -- TOP 1 needs an ORDER BY to mean "latest"
            ) tmp
            -- Return the result of the function
            RETURN @ResultVar
        END

    Below is my SQL query:

        SELECT customer.customer_id,
               customer.customer_nm,
               dbo.GetLatestOrderByCustomer(customer.customer_id)
        FROM Customer
        LEFT JOIN orderDetail ON orderDetail.customer_id = customer.customer_id

    It takes a long time to run the function. Could anybody suggest any solutions to make it better? Thanks in advance.
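
    A set-based rewrite is usually much faster than calling a scalar function once per row, and the LEFT JOIN to orderDetail in the outer query multiplies rows without contributing any columns. A minimal sketch, assuming SQL Server 2005 or later for OUTER APPLY:

        -- One probe per customer; an index on orderDetail (customer_id, orderDate DESC)
        -- lets each probe stop after a single row.
        SELECT c.customer_id,
               c.customer_nm,
               latest.comment
        FROM customer c
        OUTER APPLY (
            SELECT TOP 1 od.comment
            FROM orderDetail od
            WHERE od.customer_id = c.customer_id
            ORDER BY od.orderDate DESC
        ) latest;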

    Read the article

  • Fastest distance lookup given latitude/longitude?

    - by Ryan Detzel
    I currently have just under a million locations in a MySQL database, all with longitude and latitude information. Given another lat/lng pair, I find the distance to certain places in the database, but it's not as fast as I want it to be, especially with 100+ hits a second. Is there a faster formula, or possibly a faster system than MySQL, for this? The formula I'm using is this:

        SELECT name,
               ( 3959 * ACOS( COS( RADIANS(42.290763) ) * COS( RADIANS( locations.lat ) )
                 * COS( RADIANS( locations.lng ) - RADIANS(-71.35368) )
                 + SIN( RADIANS(42.290763) ) * SIN( RADIANS( locations.lat ) ) ) ) AS distance
        FROM locations
        WHERE active = 1
        HAVING distance < 10
        ORDER BY distance;
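
    A common speedup is to prefilter with an indexed bounding box so the trigonometry only runs on nearby candidates. A sketch under the assumption of a 10-mile radius: 10 miles is roughly 0.145 degrees of latitude, and the longitude span must be widened by 1/cos(latitude), about 0.196 degrees at this latitude.

        -- Assumes an index on (active, lat, lng); the BETWEEN bounds are cheap
        -- range predicates, and the exact distance check prunes the box corners.
        SELECT name,
               ( 3959 * ACOS( COS(RADIANS(42.290763)) * COS(RADIANS(lat))
                 * COS(RADIANS(lng) - RADIANS(-71.35368))
                 + SIN(RADIANS(42.290763)) * SIN(RADIANS(lat)) ) ) AS distance
        FROM locations
        WHERE active = 1
          AND lat BETWEEN 42.290763 - 0.145 AND 42.290763 + 0.145
          AND lng BETWEEN -71.35368 - 0.196 AND -71.35368 + 0.196
        HAVING distance < 10
        ORDER BY distance;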

    Read the article

  • Invalid UTF-8 for Postgres, Perl thinks they're ok

    - by gorilla
    I'm running Perl 5.10.0 and Postgres 8.4.3, inserting strings into a database that sits behind DBIx::Class. These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I'm getting an exception:

        DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5

    I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so using this code, it should flag and skip the bad titles:

        if (not utf8::valid($title)) {
            $title = "Invalid UTF-8";
        }
        $data->title($title);
        $data->update();

    However Perl seems to think that the strings are valid, yet it still throws the exceptions. How can I get Perl to detect the bad UTF-8?

    Read the article

  • Accessing Content from an MDF after attaching it to SQL

    - by Fidelis
    Hello, I am using WSS 3.0 and trying to restore an MDF. I attached the database in SQL Management Studio, created a new web application, and associated the database with it. When I go into Central Administration - Application Management - Content Databases, I see the WSS_Content database and it says Sites: 2. One of the sites looks very similar to the other. I was able to log into the SharePoint web app that has the database attached to it, and while it gives me the basic structure of the backed-up site, the content of the lists is blank, and that was the data I was really after (Announcements, Tasks, etc. exist as lists but are empty). How do I get access to this content?

    Read the article

  • How to optimize this MySQL query

    - by James Simpson
    This query was working fine when the database was small, but now that there are millions of rows in the database, I am realizing I should have looked at optimizing it earlier. It examines over 600,000 rows and reports Using where; Using temporary; Using filesort (which leads to an execution time of 5-10 seconds). It is using an index on the field battle_type.

        SELECT username,
               SUM( outcome ) AS wins,
               COUNT( * ) - SUM( outcome ) AS losses
        FROM tblBattleHistory
        WHERE battle_type = '0' && outcome < '2'
        GROUP BY username
        ORDER BY wins DESC, losses ASC, username ASC
        LIMIT 0, 50
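
    One plausible improvement, sketched below as an assumption to verify with EXPLAIN rather than a guaranteed fix: a composite index that serves the equality filter first and the GROUP BY column second, with outcome included so the whole query can be answered from the index alone. That should remove the temporary table and filesort for the grouping; the ORDER BY on computed aggregates still needs a sort, but over the grouped rows rather than 600,000 base rows.

        -- Index column order: equality predicate, then the GROUP BY column,
        -- then outcome so SUM/COUNT and the range filter are index-covered.
        ALTER TABLE tblBattleHistory
            ADD INDEX idx_type_user_outcome (battle_type, username, outcome);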

    Read the article

  • Using a PreparedStatement to persist an array of Java Enums to an array of Postgres Enums

    - by McGin
    I have a Java enum:

        public enum Equipment { Hood, Blinkers, ToungTie, CheekPieces, Visor, EyeShield, None; }

    and a corresponding Postgres enum:

        CREATE TYPE equipment AS ENUM ('Hood', 'Blinkers', 'ToungTie', 'CheekPieces', 'Visor', 'EyeShield', 'None');

    Within my database I have a table which has a column containing an array of "equipment" items:

        CREATE TABLE "Entry" (
            id bigint NOT NULL DEFAULT nextval('seq'::regclass),
            "date" character(10) NOT NULL,
            equipment equipment[]
        );

    And finally, when my application is running I have an array of the Equipment enums which I want to persist to the database using a PreparedStatement, and for the life of me I can't figure out how to do it:

        StringBuffer sb = new StringBuffer("insert into \"Entry\" ");
        sb.append("( \"date\", \"equipment\" )");
        sb.append(" values ( ?, ? )");
        PreparedStatement ps = db.prepareStatement(sb.toString());
        ps.setString(1, "2010-10-10");
        ps.set???????????
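
    One route that works with the Postgres JDBC driver is to bind the values as a text array and let the server cast it to the enum array type. On the SQL side the effect is the statement below (a minimal sketch; the labels must match the declared enum labels exactly):

        -- Equivalent SQL: in the prepared statement a text[] parameter stands
        -- where the ARRAY literal is, with the placeholder written as ?::equipment[].
        INSERT INTO "Entry" ("date", equipment)
        VALUES ('2010-10-10', ARRAY['Hood','CheekPieces']::equipment[]);

    On the Java side, Connection.createArrayOf("varchar", labels) plus setArray supplies the text array, with each label taken from Equipment.name(); the explicit ::equipment[] cast is what satisfies Postgres's type check.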

    Read the article

  • Best practice to display POI in iPhone's MapKit?

    - by iamj4de
    Assuming I have a database of POIs with their respective coordinates (longitude & latitude), what would be the "standard" way to display the POIs as annotations around the user's current location? To elaborate: given a zoom level, I guess I have to search the database for all POIs whose distance to the current location is below a certain threshold, then create annotations for them. Or is there a smarter way? If the user zooms in/out or moves the map, will I need to redo the whole thing again? It seems that MapKit has a mechanism to cache/reuse annotations. Should I create a lot of them right away and let MapKit decide what to render when the visible region changes? I guess this would make the transition smoother, but also consume more memory. What is your experience with this? Thanks.
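
    Since the map view reports its visible region as a center coordinate plus a latitude/longitude span, one common pattern is to translate that region directly into an indexed range query and refetch only when the region changes. A minimal sketch; the poi table, its columns, and the named placeholders are assumptions for illustration:

        -- MKCoordinateRegion maps naturally onto a bounding-box query.
        -- Capping the result keeps annotation churn and memory bounded
        -- when the user zooms far out.
        SELECT id, name, lat, lng
        FROM poi
        WHERE lat BETWEEN :centerLat - :spanLat / 2 AND :centerLat + :spanLat / 2
          AND lng BETWEEN :centerLng - :spanLng / 2 AND :centerLng + :spanLng / 2
        LIMIT 100;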

    Read the article

  • How to avoid "The name 'ConfigurationManager' does not exist in the current context" error?

    - by 5YrsLaterDBA
    I am using VS2008. I have a project that connects to a database, and the connection string is read from App.config via ConfigurationManager. We are using L2E. Now I have added a helper project, AndeDataViewer, to provide a simple UI for displaying data from the database for testing/verification purposes. I don't want to create another set of Entity Data Models in the helper project, so I just added all the related files as links in the new helper project. When I compile, I get the following error:

        Error 15 The name 'ConfigurationManager' does not exist in the current context C:\workspace\SystemSoftware\SystemSoftware\src\systeminfo\RuntimeInfo.cs 24 40 AndeDataViewer

    I think I may need to add a link to another project-settings/config-related file from the main project to the helper project? There is no App.config file in the new helper project, but it seems I cannot add a link to that file in the helper project. Any ideas?

    Read the article

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast enough. I now have to implement a logging strategy; I'm going to use log4net, but... what and how should I log? I mean: How does logging impact performance? Is it possible/advisable to log using async calls? Is it better to use a text file or a database? Is it possible to do it conditionally? For example, log to the database by default, and if that fails, switch to a text file. What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box? The requirements say the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.

    Read the article

  • PHP application with DB back-end as a complete Windows EXE - possible or not?

    - by Devyn
    Hi, let's say I have a PHP application with an SQLite back-end, and I would like to convert the whole application to an EXE file so the end user can just click it and run it on Windows. The user should be able to save, update and add new data to the database without any web server or browser involved. Is it possible? I found out that we can use PHP-GTK for the UI. exeoutput.com claims support for database engines, according to its website. Has anyone tried it out? If I'm missing something, please share with me. Thanks all in advance and happy new year!!!

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups). We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows - version 1 and version 2. During the deploy we use SP stsadm features to install/activate the WFs. We also go to each library and add the new version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 WFs as active - perfect so far. When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these version 2 WF DLLs), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it - no stsadm here. All seems to work after our restore; it appears the restore was successful - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12 hive indicate the WF is still trying to use version 2 of the DLL (System.IO file-not-found error). We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB2 bootloader. The motivation behind this: I have 4 x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/sec) and the other two are slower (up to 140MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480MB/sec. That is roughly 4*120MB/sec, so RAID-0 works at the speed of the slowest device. My idea is to create a separate RAID-0 md0 device from 500GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of up to 2*140 = 280MB/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200 = 600MB/sec. The stripe width for this RAID will be 2x bigger than for the underlying RAID of slow disks. Questions are: is it possible, or am I missing something? Will it work as expected? Can I boot from such a consolidated RAID device? Any better ideas? Any pitfalls? I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters and so on). PS: Speed is needed for a home virtualization server and just for experience/fun. Reliability is provided via regular automatic backups to a separate device. PPS: I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.

    Read the article

  • Can't delete more than 200 contacts on HTC Hero

    - by rahul
    I'm working on a security application which will copy all contacts to another database and delete all contacts from the phonebook. I'm testing this on an Android HTC Hero. I can successfully delete contacts from the phonebook and create the new contact-info database. Up to 200 contacts it works, but after 200 contacts it stops working properly and the application starts throwing errors. There is a Sync with Google option under Menu > Settings > Data Sync; I think that is causing the problem. There is a notification saying "Too many contacts deleted", and if I tap it, a dialog appears with the title "Delete Limit exceeded". Is there anything I can do to stop synchronization, or any other ideas by which I can achieve the required output? Please help me with this.

    Read the article

  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance from their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on 1 of 2 Hyper-V VMs (Svr2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) & dual quad-cores, and both VMs (only their C: drives) are on a single RAID5 array. All clients connect via 1Gb Ethernet. The 2nd VM is SBS2008 with 9GB RAM (all SBS dbs & company data are on a separate RAID5 array), leaving 3GB RAM for the Svr2008 hypervisor. I've given the Sage/SQL Server VM all the RAM I can (12GB) & SQL Server RAM caching (~8GB, never exceeds ~7.5GB, i.e. the entire db can now be cached in RAM), and that's helped significantly. Upgrading the hypervisor to Svr2012 is an obvious step, but probably not a dramatic improvement? What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB)? Can SSDs be put into the SAS hot-swap bays? Or will I have to use the mobo SATA (3Gbps?) ports, or a PCI-E SSD card? Should SSDs be RAIDed for this situation? Or does the higher reliability of SSDs offset the need for RAID1/5/10? (I have nightly full-disk backups.) New territory for me; I would appreciate some feedback. Thanks, Anthony.

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script in order to back up three VMs. I put it in a crontab but I have an issue. In my backup folder I have 3 different folders, one for each VM. In each folder I have the following files:

        -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        http://...
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. ... Clone: 9% done.
        Clone Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. ... Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the VCB script is not able to handle the rotation of the VM backups? Any ideas? Thanks!

    Read the article

  • Disable Autocommit with Spring/Hibernate/C3P0/Atomikos ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing with org.hibernate.NonUniqueObjectException even though Spring Test should be rolling back the database after each test method):

        [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties:
        .....edited for brevity....
        hibernate.connection.autocommit=true,
        ....more stuff follows

    The connection URL for H2 has AUTOCOMMIT=OFF, but according to the H2 documentation, "this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used)". So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the hibernate.connection.autocommit property in the documentation and put it in my EntityManagerFactory config as follows:

        <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="myapp-core" />
            <property name="persistenceUnitPostProcessors">
                <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor">
                    <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" />
                </bean>
            </property>
            <property name="jpaVendorAdapter">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                    <property name="showSql" value="true" />
                    <property name="database" value="$DS{hibernate.database}" />
                    <property name="databasePlatform" value="$DS{hibernate.dialect}" />
                </bean>
            </property>
            <property name="jpaProperties">
                <props>
                    <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop>
                    <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop>
                    <prop key="hibernate.connection.autocommit">false</prop>
                    <prop key="hibernate.format_sql">true</prop>
                    <prop key="hibernate.use_sql_comments">true</prop>
                </props>
            </property>
        </bean>

    Read the article

  • ASP.NET MVC 2 RC2 Model Binding with NVARCHAR NOT NULL column

    - by Gary McGill
    I have a database column defined as NVARCHAR(1000) NOT NULL DEFAULT(N'') - in other words, a non-nullable text column with a default value of blank. I have a model class generated by the Linq-to-SQL Classes designer, which correctly identifies the property as not nullable. I have a TextAreaFor in my view for that property. I'm using UpdateModel in my controller to fetch the value from the form and populate the model object. If I view the web page and leave the text area blank, UpdateModel insists on setting the property to NULL instead of empty string. (Even if I set the value to blank in code prior to calling UpdateModel, it still overwrites that with NULL). Which, of course, causes the subsequent database update to fail. I could check all such properties for NULL after calling UpdateModel, but that seems ridiculous - surely there must be a better way? Please don't tell me I need a custom model binder for such a simple scenario...!
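
    A column DEFAULT only applies when the column is omitted from the statement entirely; an explicit NULL overrides it and then trips the NOT NULL constraint, which is why the binder's NULL reaches the database as an error rather than being silently defaulted. A minimal illustration, using a hypothetical Items table in place of the real one:

        -- Hypothetical table mirroring the scenario.
        CREATE TABLE Items (
            Name        NVARCHAR(100) NOT NULL,
            Description NVARCHAR(1000) NOT NULL DEFAULT (N'')
        );

        -- Succeeds: Description is omitted, so the default N'' is applied.
        INSERT INTO Items (Name) VALUES (N'test');

        -- Fails: an explicit NULL bypasses the default on a NOT NULL column.
        INSERT INTO Items (Name, Description) VALUES (N'test', NULL);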

    Read the article

  • C# Design Questions

    - by guazz
    How should I approach unit testing of private methods? I have a class that loads Employee data into a database. Here is a sample:

        public class EmployeeFacade
        {
            public Employees EmployeeRepository = new Employees();
            public TaxDatas TaxRepository = new TaxDatas();
            public Accounts AccountRepository = new Accounts();
            // ...and so on for about 20 more repositories etc.

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null)
                    throw new Exception("...");
                EmployeeRepository emps = new EmployeeRepository();
                bool exists = emps.FetchExisting(emps.Id);
                if (!exists)
                {
                    emps.AddNew();
                }
                try
                {
                    emps.Id = employee.Id;
                    emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;
                    // emps.SomeOtherAttribute = ...;
                }
                catch () {}
                try { emps.Save(); } catch () {}
                try { LoadorUpdateTaxData(employee.TaxData); } catch () {}
                try { LoadorUpdateAccountData(employee.AccountData); } catch () {}
                // ...etc. for about 20 more other employee objects
            }

            private bool LoadorUpdateTaxData(int employeeId, TaxData taxData)
            {
                if (taxData == null)
                    throw new Exception("...");
                // ...same format as above but using TaxRepository
            }

            private bool LoadorUpdateAccountData(AccountData accountData)
            {
                // ...same format as above but using AccountRepository
            }
        }

    I am writing an application to take serialised objects (e.g. Employee above) and load the data into the database. I have a few design questions that I would like opinions on:

    A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practice to name the pattern in the class name?

    B - Is it good to call the concrete entities of my DAL layer classes "Repositories", e.g. "EmployeeRepository"?

    C - Is using the repositories this way sensible, or should I create a method on the repository itself that takes, say, the Employee and loads the data from there, e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aiming for cohesive classes, but this would require the repository to have knowledge of the Employee object, which may not be good?

    D - Is there any nice way around not having to check whether an object is null at the beginning of each method?

    E - I have EmployeeRepository, TaxRepository, and AccountRepository declared as public for unit testing purposes. These are really private entities, but I need to be able to substitute them with stubs so that they won't write to my database (I overload the Save() method to do nothing). Is there any way around this, or do I have to expose them?

    F - How can I test the private methods - or is this done (something tells me it's not)?

    G - "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" breaks the Law of Demeter, but how do I adjust my objects to abide by the law?

    Read the article

  • Windows 7 Black Screen On Boot, Separate Bootable VHD Works Fine

    - by David Osborn
    I have a Windows 7 x64 install with a bootable VHD (also Windows 7 x64). I was having problems getting my home server to do backups (VSS erred), so I ran Check Disk and used a tool from MS (cleanc2r.exe) to remove an empty Q: drive from the VHD that I believe was a result of installing Office 2010 Beta. (All of this was done on the bootable VHD, not the main install.) Now I can't boot into the main install. It gets past the Starting Windows screen and then goes black. I can still boot into the bootable VHD and everything works fine from there. I have tried to boot the main install in Safe Mode / Safe Mode with Networking / Safe Mode with Command Prompt, and it has the same issue. I ran chkdsk /r on the main install, and after doing all the work there was a message about correcting some free space that was marked as allocated, and also that it was unable to make an entry in the event log. I tried the Startup Repair utility and it found no problems. I don't see the setting for restoring to the last known good configuration, so I couldn't do that. I don't recall installing anything new on the main install or having hooked up any new hardware recently.

    Read the article

  • Favorite tricks with linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available -- Knoppix, for instance, has a list on their Knoppix Cheat Codes page -- but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter the screen mode or drivers, or specify an alternative root directory. Other more exotic uses are possible. Some Linux distributions let you copy the boot CD into RAM. Others (e.g., Ubuntu) let you use preseed files to clone installs when setting up multiple systems -- useful when installing a lab full of computers without having to babysit each install. What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list: as much of the code for these options goes on either in initrd or in a service handler that detects the kernel parameters, please list (1) the kernel boot-line parameter, (2) what it does, and (3) the Linux distribution and any packages required to activate the feature. Thanks.

    Read the article

  • sql query to incrementally modify where condition until result contains what is required

    - by iamrohitbanga
    I need an SQL query to select some items from a table based on a condition over a category field. As an example, consider a list of people from which I fetch a particular age group from the database. I want to check that the result contains at least one person from each of a number of categories. If not, I want to widen the age group and check the results again. This is repeated until I get an age group for which at least one result is present for each category. Right now I am doing this by analyzing the results and modifying the SQL query, so a number of SELECT queries are sent. What is the most efficient way of doing this? I am invoking the select queries from a Java program using JDBC. I am using a MySQL database.
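
    One way to collapse the loop into a single round trip is to let the database compute how far the age window must be widened: for each category, take the person closest to the target age, then take the maximum of those distances. A minimal sketch; the people table, its columns, and the target age of 30 are assumptions for illustration:

        -- half_width is the smallest window radius such that every category
        -- has at least one person within [30 - half_width, 30 + half_width].
        SELECT MAX(min_dist) AS half_width
        FROM (
            SELECT category, MIN(ABS(age - 30)) AS min_dist
            FROM people
            GROUP BY category
        ) AS per_category;

    A second query with WHERE age BETWEEN 30 - half_width AND 30 + half_width then returns the final result, so two statements replace the incremental widening loop (categories with no rows at all simply never appear, so they need a separate existence check if that matters).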

    Read the article

  • Best practices in ASP.Net code behind pages.

    - by patricks418
    Hi, I am an experienced developer, but I am new to web application development. Now I am in charge of developing a new web application, and I could really use some input from the experienced web developers out there. I'd like to understand exactly what experienced web developers do in code-behind pages. At first I thought it was best to have a rule that all database access and business logic be performed in classes external to the code-behind pages, with only the logic necessary for the web form performed in the code-behind. I still think that all the business logic should be performed in other classes, but I'm beginning to think it would be alright if the code-behind had access to the database to query it directly, rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.

    Read the article
