Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous stored procedure checks several conditions against the inserted data and updates other tables. It ran fine for the last month, with the stored procedure executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints holds far too many records. (Note that all the tables are truncated daily, as I do not need the records the day after.) Other database activity is normal (average 60% CPU utilization). Where do I need to look? I could re-create the database without any problem, but I don't think that is a good way to resolve this.
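
    A symptom like this usually means conversations are created but never ended, so sys.conversation_endpoints grows without bound. A minimal one-off cleanup sketch, assuming sysadmin rights (the durable fix is to END CONVERSATION inside the activation procedure itself):

        DECLARE @handle UNIQUEIDENTIFIER;
        DECLARE conv CURSOR FOR
            SELECT conversation_handle FROM sys.conversation_endpoints;
        OPEN conv;
        FETCH NEXT FROM conv INTO @handle;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- WITH CLEANUP drops the endpoint without the normal shutdown handshake
            END CONVERSATION @handle WITH CLEANUP;
            FETCH NEXT FROM conv INTO @handle;
        END
        CLOSE conv;
        DEALLOCATE conv;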

    Read the article

  • Entity Framework: Data Centric vs. Object Centric

    - by Eric J.
    I'm having a look at Entity Framework, and everything I'm reading takes a data-centric approach to explaining EF. By that I mean the fundamental relationships of the system are first defined in the database, and objects are generated that reflect those relationships. Examples: "Quickstart (Entity Framework)" and "Using Entity Framework entities as business objects?". The EF documentation implies that it's not necessary to start from the database layer, e.g. "Developers can work with a consistent application object model that can be mapped to various storage schemas." When designing a new system (simplified version), I tend to first create a class model, then generate business objects from the model, code the business-layer logic that can't be generated, and then worry about persistence (or rather work with a DBA and let him worry about the most efficient persistence strategy). That object-centric approach is well supported by ORM technologies such as (N)Hibernate. Is there a reasonable path to an object-centric approach with EF? Will I be swimming upstream going that route? Any good starting points?

    Read the article

  • Get list of named queries in NHibernate

    - by Dan
    I have a dozen or so named queries in my NHibernate project, and I want to execute them against a test database in unit tests to make sure the syntax still matches the changing domain/database model. Currently I have a unit test for each named query where I get and execute the query, for example:

        IQuery query = session.GetNamedQuery("GetPersonSummaries");
        var personSummaryArray = query.List();
        Assert.That(personSummaryArray, Is.Not.Null);

    This works fine, but I would like to have one unit test that loops through all of the named queries and executes them. Is there a way to discover all of the available named queries? Thanks, Dan
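
    The query names are exposed on the NHibernate Configuration object. A minimal sketch, assuming the Configuration used to build the session factory is still available in the test fixture:

        // NamedQueries maps each query name to its definition
        foreach (string name in configuration.NamedQueries.Keys)
        {
            IQuery query = session.GetNamedQuery(name);
            Assert.That(query.List(), Is.Not.Null, "Named query failed: " + name);
        }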

    Read the article

  • Regenerate the ID column in a MySQL table with a MySQL script or PHP

    - by DomingoSL
    Hello, I have a database filled with information about the users of my webpage. The table, like many MySQL tables, has an auto-increment ID column. The issue is that when somebody deletes his account, a gap remains in the sequence, which I don't want, because I have a script that fails if it finds a gap in the IDs. Example:

        ID  Name   PASS
        1   Jhon   1234
        2   Max    2233
        3   Jorge  2232

    If Max leaves and a new user signs up, this is what happens:

        ID  Name   PASS
        1   Jhon   1234
        4   NewU   1133
        3   Jorge  2232

    So what is the best way to delete somebody from the database while avoiding this issue? Or, failing that, is it possible to write a PHP or MySQL script that clears the ID column and regenerates it in order? Thanks a lot!
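
    A hedged sketch of one common renumbering approach (the table name users is illustrative; back up the table first, and note that any foreign keys referencing these IDs would need the same update):

        SET @n := 0;
        UPDATE users SET id = (@n := @n + 1) ORDER BY id;
        ALTER TABLE users AUTO_INCREMENT = 1;

    That said, the usual advice is to fix the script so it tolerates gaps; auto-increment IDs are not guaranteed to be contiguous.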

    Read the article

  • How can I take a one time full backup of a Windows Server without the need for a restore program?

    - by TheCleaner
    I have a Windows SBS server with about 500GB of data that I'm decommissioning but I'd like to take a final backup of the server and place it on an external USB drive. I already have multiple backups of the server on disk from the past but they are through Simpana Commvault. I'd like a backup that will simply copy the file structure, ACLs, timestamps, etc. as is to a NTFS volume on the external drive. This way if someone says "I need x file on the server you decommed" I can search the external drive real quick instead of firing up Commvault, cataloging, restore, etc. I know the built-in Windows backup is great, I just don't feel like running it for a restore job on this. I'd like an option where in the future it won't require a program to run a restore. Rather a simple mount of the drive will suffice. I believe I can use robocopy just fine, but I'm not sure if it will grab the Windows directory, system files, and full user profiles correctly even with the /ZB option. Options? Is Robocopy /E /ZB /COPYALL /DCOPY:DAT /MT:32 /R:5 /W:5 /LOG:copylog.log the way to go here?

    Read the article

  • Is it possible to use Windows Speech Recognition Engine in a word pronunciation game?

    - by XBasic3000
    I want to create an application that uses the Windows speech recognition engine (SAPI). It's like a pronunciation game: it gives you a score when you pronounce a word correctly. When I started experimenting with SAPI, recognition was poor unless I loaded an XML grammar, which gives the best recognition results. But the problem now is that the closest pronunciation to the input text is always recognized. For example: "database" spoken as "dedebase" is marked correct; even if you mispronounce the word, it gives you the correct answer. Without the XML grammar, when you say "database" it returns "in the base", "the base", "data base", etc. Any answers, suggestions, or clarifications are welcome. Is it possible or not? By the way, I use the Delphi compiler for this project.
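
    For reference, a hedged sketch of a SAPI 5 command-and-control grammar of the kind described (the exact element names are worth verifying against the SAPI documentation). One technique is to list the target word and its known confusions as separate phrases, so the application can see which one actually matched instead of the engine snapping everything to the single loaded word:

        <GRAMMAR LANGID="409">
          <RULE NAME="word" TOPLEVEL="ACTIVE">
            <L>
              <P>database</P>
              <P>data base</P>
              <P>dedebase</P>
            </L>
          </RULE>
        </GRAMMAR>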

    Read the article

  • Data Store/Volume disconnecting. How to resume copy of VMDK?

    - by Serge
    I'm having an issue with my ESXi 4.1 hosts losing their datastore on an FC SAN after a power outage. All 3 hosts disconnect, so it's definitely a SAN issue. I've tried to resolve the issue on the SAN side with the SAN software support and Adaptec hardware support. No luck there. So I'm stuck with a SAN that randomly disconnects the volume. I need to get the virtual machines (VMDK files) off the datastore, but I can only copy 5-20% before the datastore disconnects. I have slightly older backups that I can replicate the VMDK differences to. What has not worked so far:

    - Powering up the VMs: they boot for 5-15 minutes, then freeze.
    - vCenter migrate or clone of a VM: fails after a similar period of time.
    - vCenter copy/paste of the VMDK: was able to get one 30 GB VMDK, and no luck after that.
    - VMware Data Recovery: fails at a low percentage and can't resume, so the next backup starts from the beginning.
    - Veeam Backup & Replication: same as above, no resume function.

    If I can just find a backup solution that will resume from the failed spot, that would solve my issue. Anyone have any ideas I could try?

    EDIT 1: The SAN is Open-E DSS 6 running on a Supermicro 24-drive enclosure with a 4-port QLogic FC card and an Adaptec 52445 RAID card.

    Read the article

  • How to echo an auto-increment id

    - by Marcelo
    Hi, can I print the id even if it's auto-increment? The way I'm doing it, I'm using an empty variable for the id:

        $id = "";
        mysql_connect("localhost", $username, $password);
        @mysql_select_db($database) or die("Could not connect to database $database");
        mysql_query("INSERT INTO table1 (id, ...) VALUES ('" . $id . "', ....)") or die(mysql_error());
        mysql_close();
        echo "Your id is: ";
        echo $id;

    I'm trying to print the id, but it comes out blank. I checked the table and there's an id number there. How can I print it? Thanks for your attention.
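
    Inserting an empty string never populates the PHP variable; the value is generated by MySQL at insert time. A hedged sketch of the usual pattern: leave the id column out of the INSERT and read the generated value back with mysql_insert_id() (the old mysql_* API is kept here only to match the question's code):

        mysql_query("INSERT INTO table1 (name, ...) VALUES ('...')") or die(mysql_error());
        $id = mysql_insert_id();   // id generated by the last INSERT on this connection
        echo "Your id is: " . $id;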

    Read the article

  • Converting DAO to ADO

    - by webworm
    I am working with an Access 2003 database that has a subroutine using DAO code. This code loops through the table definitions and refreshes the ODBC connection string. I would like to convert this to ADO so I do not have to reference the DAO object library. Here is the code:

        Public Sub RefreshODBCLinks(newConnectionString As String)
            Dim db As DAO.Database
            Dim tb As DAO.TableDef
            Set db = CurrentDb
            For Each tb In db.TableDefs
                If Left(tb.Connect, 4) = "ODBC" Then
                    tb.Connect = newConnectionString
                    tb.RefreshLink
                    Debug.Print "Refreshed ODBC table " & tb.Name
                End If
            Next tb
            Set db = Nothing
            MsgBox "New connection string is " & newConnectionString, vbOKOnly, "ODBC Links refreshed"
        End Sub

    The part I am unsure of is how to loop through the tables and get/set their connection strings.
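
    On the ADO side, linked-table metadata is reached through the ADOX extension library rather than plain ADO. A hedged sketch (requires a reference to "Microsoft ADO Ext. for DDL and Security"; the property and type names are from memory and worth verifying against the ADOX documentation):

        Public Sub RefreshODBCLinksADO(newConnectionString As String)
            Dim cat As New ADOX.Catalog
            Dim tbl As ADOX.Table
            Set cat.ActiveConnection = CurrentProject.Connection
            For Each tbl In cat.Tables
                ' ODBC-linked tables surface with this table type in Jet
                If tbl.Type = "PASS-THROUGH" Then
                    tbl.Properties("Jet OLEDB:Link Provider String") = newConnectionString
                    Debug.Print "Refreshed ODBC table " & tbl.Name
                End If
            Next tbl
        End Sub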

    Read the article

  • Rails CSV import, adding to a related table

    - by Jack
    Hi, I have a CSV importing system on my app (used locally only) which parses the CSV file line by line and adds the data to the database table. This is based on a tutorial here.

        require 'csv'

        def csv_import
          @parsed_file = CSV::Reader.parse(params[:dump][:file])
          n = 0
          @parsed_file.each_with_index do |row, i|
            next if i == 0 # ignore the first row
            course = Course.new
            course.title = row[0]
            course.unit_code = row[1]
            course.course_type = row[2]
            course.value = row[3]
            course.pass_mark = row[4]
            if course.save
              n = n + 1
              GC.start if n % 50 == 0
            end
            flash.now[:message] = "CSV Import Successful, #{n} new courses added to the database."
          end
          redirect_to(courses_url)
        end

    This is all in the courses controller and works fine. There is a relationship: courses HABTM years, and years HABTM courses. In the CSV file (effectively in row[5] to row[8]) are the year_ids. Is there a way I can add this within the method above? I am confused as to how to loop over the 4 items and add them to the courses_years table. Thank you, Jack
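
    A hedged sketch of one way to wire up the HABTM side inside the loop, assuming columns 5-8 hold Year ids and Course declares has_and_belongs_to_many :years:

        if course.save
          n = n + 1
          # columns 5..8 are year ids; skip blanks and unknown ids
          row[5..8].each do |year_id|
            year = Year.find_by_id(year_id)
            course.years << year if year
          end
          GC.start if n % 50 == 0
        end

    Appending through the association writes the join rows into courses_years automatically.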

    Read the article

  • WAMP server not working, or bad PHP code?

    - by lclaud
    I have this PHP code:

        <?php
        $username = "root";
        $password = "******"; // censored out
        $database = "bazadedate";
        mysql_connect("127.0.0.1", $username, $password); // I get "unknown constant localhost" if used instead of the loopback IP
        @mysql_select_db($database) or die("Unable to select database");
        $query = "SELECT * FROM backup";
        $result = mysql_query($query);
        $num = mysql_numrows($result);
        $i = 0;
        $raspuns = "";
        while ($i < $num) {
            $data = mysql_result($result, $i, "data");
            $suma = mysql_result($result, $i, "suma");
            $cv = mysql_result($result, $i, "cv");
            $det = mysql_result($result, $i, "detaliu");
            $raspuns = $raspuns . "#" . $data . "#" . $suma . "#" . $cv . "#" . $det . "@";
            $i++;
        }
        echo "<b> $raspuns </b>";
        mysql_close();
        ?>

    It should return a single string containing all the data from the table, but the browser reports "connection reset" when loading the page. The Apache log is:

        [Tue Jun 15 16:20:31 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:31 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:31 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:31 2010] [notice] Parent: Created child process 2336
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Child process is running
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Acquired the start mutex.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting 64 worker threads.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting thread to listen on port 80.
        [Tue Jun 15 16:20:35 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:35 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:35 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:35 2010] [notice] Parent: Created child process 1928
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Child process is running
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Acquired the start mutex.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting 64 worker threads.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting thread to listen on port 80.

    Any idea why it outputs nothing?

    Read the article

  • WebSphere MQ/MQSeries - Possible to send a message to multiple queues with single call?

    - by Jeffrey White
    I'm queuing messages to a WebSphere MQ queue (NB: A point-to-point queue -- not a topic) using a stored procedure in my Oracle database. Is there a way to publish each message to multiple queues with a single call? What I would like is to find a solution that would incur zero additional latency on my database compared to sending the message to a single queue. Solutions that involve changing my WebSphere MQ settings are certainly welcome! What I had in mind was somehow creating a "clone" queue that got all the same messages as the original one, but I've been unable to locate anything like this in the documentation. Thanks, Jeff

    Read the article

  • Android v1.5 w/ browser data storage

    - by Sirber
    I'm trying to build an offline web application which can sync online when the network is available. I tried jQuery jStore, but the test page stops at "testing..." without a result. Then I tried Google Gears, which is supposed to work on the phone, but Gears is not found:

        if (window.google && google.gears) {
            google.gears.factory.getPermission();
            // Database
            var db = google.gears.factory.create('beta.database');
            db.open('cominar-compteurs');
            db.execute('create table if not exists Lectures' +
                ' (ID_COMPTEUR int, DATE_HEURE timestamp, kWh float, Wmax float, VAmax float, Wcum float, VAcum float);');
        } else {
            alert('Google Gears not found.');
        }

    The code does work in Google Chrome v5.

    Read the article

  • Adding a row to an existing datatable in JSF

    - by shyamb
    Hi, I have a requirement to change an existing JSF 1.1 project: I need to add a row to a datatable on each click of a button. Currently the datatable loads 3 rows from the backing bean, and the new button should add an additional row on each click. Using the suggestion provided by http://balusc.blogspot.com/2006/06/using-datatables.html I was able to display the additional row in the UI, but I could not save the new data back to the database, because the backing bean is in request scope and I cannot change the scope of this bean without creating other issues. Can somebody suggest a way to display the new row and also save the data back to the database while the backing bean stays in request scope? Thanks, Shyam

    Read the article

  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and I was going to separate it out into its own table so it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx) which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query:

        return _session.CreateQuery("from Product")
            .SetMaxResults(1)
            .UniqueResult<Product>();

    Mapping:

        <property name="Description" type="string" column="FullDescription" lazy="true"/>

    Has anyone been able to get this working? Personally, I prefer this approach to having to add another table to my database.

    Read the article

  • Optimising speed in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns write speed on the machines (10 rented 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented 10 computers for a day and am trying to spread the writes across them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 tables referred to by primary/foreign keys. I am not using joins, as for my purposes they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5. For each lookup I do 6 inserts, one into the table and one into each of its corresponding arrays.

    The queries are really simple:

        select * from x.train where tr_id=1      (primary key & indexed)
        select q_t from x.qt where q_id=2        (non-primary key but indexed)

    (and similarly for the other five queries)

    Each computer writes two HDF5 files, so the total count comes to 20 files.

    Some calculations and statistics:

        Total number of records:       143,700,000
        Records per file:              143,700,000 / 20 = 7,185,000
        Total entries in each file:    7,185,000 * 5 = 35,925,000

    Current PostgreSQL configuration: my current machine has 8 GB RAM and an i7 2nd-generation processor. I made the following changes to the postgresql configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments.

    Please suggest how I could improve this.

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? If so, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it improve the process?
    3. Should I use multithreading? If so, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
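
    One pattern worth trying before adding machines, as a hedged sketch (assumes psycopg2 and PyTables; table, column, and file names are illustrative, and older PyTables releases spell these calls openFile/createEArray): replace the per-record indexed SELECTs with one streaming scan per table, using a named (server-side) cursor, and append to the HDF5 arrays in blocks:

        import psycopg2
        import tables

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor(name="stream")   # named cursor => server-side, fetched lazily
        cur.itersize = 10000
        cur.execute("SELECT q_id, q_t FROM x.qt ORDER BY q_id")

        h5 = tables.open_file("out.h5", mode="w")
        arr = h5.create_earray(h5.root, "qt", tables.Float64Atom(), shape=(0,))
        while True:
            batch = cur.fetchmany(10000)
            if not batch:
                break
            arr.append([row[1] for row in batch])  # block appends amortize HDF5 overhead
        h5.close()
        conn.close()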

    Read the article

  • What are the Pros & Cons of using SQL Azure for existing apps on dedicated servers

    - by Mark Redman
    We currently own our own servers and rent a rack in a datacentre. Looking at the pricing, scalability, and SLAs for SQL Azure, I am thinking it might be viable to use only SQL Azure for the database while continuing to run our existing applications on our own servers in the datacentre. This would let us stop worrying about the database and its infrastructure, so we can concentrate on building an application server farm with disk storage for files, etc. Our application is quite big, has various Windows services, and parts of it use unmanaged libraries that may not be feasible in the cloud, so we probably couldn't host everything in Azure. The pros: reduced total cost of ownership (no database servers, no SQL Server licenses). The cons: I guess there would be latency in transferring data between the Azure cloud and our datacentre (i.e. the cloud may be in the US while the datacentre is in the UK), but would this be workable?

    Read the article

  • MySQL - how to update the "domain.com" in "user@domain.com"

    - by w00t
    Hi there, in my database I have a lot of users who've misspelled their e-mail address, which causes my Postfix to bounce a lot of mail when sending the newsletter. The misspellings include (but are not limited to) "yaho.com", "yahho.com", etc. Very annoying! So I have been trying to update those records to the correct value. After executing

        select email from users where email like '%@yaho%' and email not like '%yahoo%';

    and getting the list, I'm stuck because I don't know how to update only the "yaho" part; the username needs to be left intact. I thought I would just dump the database and use vim to replace, but I cannot escape the @ symbol. By the way, how do I select all email addresses written in CAPS? select upper(email) from users; would transform everything into caps, whereas I need to find the addresses that are already stored in caps.
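
    A hedged sketch of both pieces in plain MySQL (run the SELECT and take a backup first; the domain values are the ones from the question):

        -- Rewrite only the domain, keeping the username intact
        UPDATE users
        SET email = CONCAT(SUBSTRING_INDEX(email, '@', 1), '@yahoo.com')
        WHERE email LIKE '%@yaho%' AND email NOT LIKE '%yahoo%';

        -- Find addresses already stored entirely in capitals
        -- (BINARY forces a case-sensitive comparison)
        SELECT email FROM users WHERE email = BINARY UPPER(email);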

    Read the article

  • How to get nearby POIs

    - by balexandre
    I have a database of Points of Interest that all have an address, and I want to know the method/name/call to get all nearby POIs for a given position. I understand that I first need to geocode all my addresses to lat/lon coordinates, but my question is: for a given lat/lon, how do I get from the database/array the POIs that are nearby, ordered by distance? For example:

        You are here: 0,0
        Nearest POIs in a 2 km radius:
            POI A (at 1.1 km)
            POI C (at 1.3 km)
            POI F (at 1.9 km)

    I have no idea what I should look into to get what I want :-( Any help is greatly appreciated. Thank you.
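
    The search keyword is the haversine (great-circle distance) formula. A hedged sketch in MySQL syntax, assuming a poi table with lat/lon columns in degrees; :lat and :lon stand for the search position:

        SELECT id, name,
               (6371 * ACOS(COS(RADIANS(:lat)) * COS(RADIANS(lat))
                          * COS(RADIANS(lon) - RADIANS(:lon))
                          + SIN(RADIANS(:lat)) * SIN(RADIANS(lat)))) AS distance_km
        FROM poi
        HAVING distance_km < 2      -- 2 km radius; 6371 is the Earth radius in km
        ORDER BY distance_km;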

    Read the article

  • How to "defragment" MongoDB index effectively in production?

    - by dfrankow
    I've been looking at MongoDB. Feels good. I added some indexes to a collection, uploaded a bunch of data, then removed all the data, and I noticed the indexes did not change size, similar to the behavior reported here. If I call db.repairDatabase(), the indexes are squashed to near zero. Similarly, if I don't remove all the data but still call repairDatabase(), the indexes shrink somewhat (perhaps because unused extents are truncated?). I am getting the index size from "totalIndexSize" in db.collection.stats(). However, repairDatabase() takes a long time (I've read it can be hours on a large database), and it's unclear to me how available the database is for reads or writes while it is running; I am guessing not very. Since I want to run as few instances of mongod as possible, I want to understand more about how indexes are managed after deletes. Can anyone point me to anything or give any advice?
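
    For reference, a hedged sketch of the narrower alternatives available in the mongo shell (availability depends on the server version, and both take collection-level locks while running):

        // rebuild just this collection's indexes
        db.mycollection.reIndex()

        // rewrite and defragment a single collection in place (newer servers)
        db.runCommand({ compact: "mycollection" })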

    Read the article

  • node.js with SQL Server Native Client 11 scope_identity not being returned

    - by binderbound
    I'm having trouble inserting a value into a database through node.js. Here is the offending code:

        sql.query(conn_str,
            "INSERT INTO Login(email, hash, salt, firstName, lastName) VALUES(?, ?, ?, ?, ?); SELECT SCOPE_IDENTITY() AS 'Identity';",
            [email, hash, salt, firstName, lastName],
            function(err, results) {
                console.log(results);
            });

    Unfortunately, the console just echoes [], meaning results is an empty array, I suppose. Does anyone know why the identity is not being returned? Even if it were null, why isn't results then [{ Identity: null }]? The database is on Azure, which does have a SCOPE_IDENTITY function, and the native client also recognises it. I'm using the node package "msnodesql". Please help.
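
    A common cause with multi-statement batches is that the INSERT's rows-affected count comes back as the first result, hiding the SELECT. A hedged sketch of two T-SQL workarounds (assuming the identity column is named id; adjust to the real schema):

        -- suppress the rows-affected message so the SELECT is the only result set
        SET NOCOUNT ON;
        INSERT INTO Login(email, hash, salt, firstName, lastName) VALUES(?, ?, ?, ?, ?);
        SELECT SCOPE_IDENTITY() AS [Identity];

        -- or return the identity from the INSERT itself
        INSERT INTO Login(email, hash, salt, firstName, lastName)
        OUTPUT INSERTED.id
        VALUES(?, ?, ?, ?, ?);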

    Read the article

  • How do I mock a custom field that is deleted so that south migrations run?

    - by muhuk
    I have removed an app that contained a couple of custom fields from my project. Now when I try to run my migrations I get an ImportError, naturally. The fields were very basic customizations like the one below:

        from django.db.models.fields import IntegerField

        class SomeField(IntegerField):
            def get_internal_type(self):
                return "SomeField"

            def db_type(self, connection=None):
                return 'integer'

            def clean(self, value):
                # some custom cleanup
                pass

    So none of them contain any database-level customizations. When I removed this code I created migrations, so subsequent migrations all ran fine. But when I tried to run them on a pre-deletion database, I realized my mistake. I could re-create a bare-bones app to make the imports work, but ideally I would like to know whether South has a mechanism to resolve these issues, or whether there are any best practices. It would be cool if I could solve this just by modifying my migrations and not touching the codebase. (Django 1.3, South 0.7.3)
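
    Since SomeField stores a plain integer, one approach that only touches the migrations is to edit the frozen model definitions in the old migration files, swapping the dead import path for the stock field. A hedged sketch (the app, field, and column names are illustrative):

        # in each affected migrations/000X_*.py, inside the frozen `models` dict:
        # before:
        'some_column': ('deadapp.fields.SomeField', [], {}),
        # after -- the custom field was DB-equivalent to an IntegerField:
        'some_column': ('django.db.models.fields.IntegerField', [], {}),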

    Read the article

  • Recommended tutorials for PHP, MySQL, and graphs

    - by Vinit Joshi
    I need help finding a tutorial, or anything else, to help me create a comparison chart. The user can search for device names; the names are shown in a drop-down box populated dynamically from the database. I want the user to be able to select two separate devices and view the devices' information plotted on a graph. I'd be grateful for keywords I can search for, or any tutorials that will help me carry on with this task. Currently I can see the data that has been inserted into the database on screen, placed inside a table. So far I have used XHTML, PHP, and MySQL. I've tried to make the question as clear as possible, so sorry if it confuses anyone.

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines that are simply the database name followed by the delimiter are errors, and they are reported as such when running the script. As long as they are ignored, everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article
