Search Results



  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and I was going to move it into a separate table so it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx), which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query:

        return _session.CreateQuery("from Product")
            .SetMaxResults(1)
            .UniqueResult<Product>();

    Mapping:

        <property name="Description" type="string" column="FullDescription" lazy="true"/>

    Has anyone been able to get this working? Personally, I prefer this approach to having to add another table to my database.

    Read the article

  • Adding a row to an existing datatable in JSF

    - by shyamb
    Hi, I have a requirement to change an existing JSF 1.1 project so that an additional row is added to a datatable when a button is clicked. Currently the datatable loads 3 rows from the backing bean, and the new button should add a further row on each click. Using the suggestion provided by http://balusc.blogspot.com/2006/06/using-datatables.html I was able to display the additional row in the UI, but I could not save the new data back to the database because the backing bean is in request scope, and I cannot change the scope of this bean as it would create other issues. Can somebody provide a solution for displaying the new row and also saving the data back to the database while the backing bean stays in request scope? Thanks, Shyam

    Read the article

  • Get list of named queries in NHibernate

    - by Dan
    I have a dozen or so named queries in my NHibernate project, and I want to execute them against a test database in unit tests to make sure the syntax still matches the changing domain/database model. Currently I have a unit test for each named query where I get and execute the query, for example:

        IQuery query = session.GetNamedQuery("GetPersonSummaries");
        var personSummaryArray = query.List();
        Assert.That(personSummaryArray, Is.Not.Null);

    This works fine, but I would like to have one unit test that loops through all of the named queries and executes them. Is there a way to discover all of the available named queries? Thanks, Dan

    Read the article

  • BASH Install Of Wordpress, Without Visiting wp-admin/install.php

    - by user916825
    I wrote this little BASH script that creates a folder, unzips Wordpress and creates a database for a site. The final step is actually installing Wordpress, which usually involves pointing your browser to install.php and filling out a form in the GUI. I want to do this from the BASH shell, but can't figure out how to invoke wp_install() and pass it the parameters it needs: -admin_email -admin_password -weblog_title -user_name (line 85 in install.php). Here's a similar question, but in python.

        #!/bin/bash
        #ask for the site name
        echo "Site Name:"
        read name

        # make site directory under splogs
        mkdir /var/www/splogs/$name
        dirname="/var/www/splogs/$name"

        #import wordpress from dropbox
        cp -r ~/Dropbox/Web/Resources/Wordpress/Core $dirname
        cd $dirname

        #unwrap the double wrap
        mv Core/* ./
        rm -r Core

        mv wp-config-sample.php wp-config.php
        sed -i 's/database_name_here/'$name'/g' ./wp-config.php
        sed -i 's/username_here/root/g' ./wp-config.php
        sed -i 's/password_here/mypassword/g' ./wp-config.php

        cp -r ~/Dropbox/Web/Resources/Wordpress/Themes/responsive $dirname/wp-content/t$
        cd $dirname

        CMD="create database $name"
        mysql -uroot -pmypass -e "$CMD"

    How do I alter the script to automatically run the installer without the need to open a browser?
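    One possible approach (an editorial sketch, not part of the original question) is to skip calling wp_install() directly and instead submit the same form the browser would, by POSTing to wp-admin/install.php?step=2 once wp-config.php is in place. The field names below (weblog_title, user_name, admin_password, admin_password2, admin_email, blog_public) are assumptions based on the stock install form and should be checked against the install.php shipped with your WordPress version.

        # post_install.py -- hypothetical helper; drives the WordPress web installer
        # without a browser by submitting the setup form over HTTP.
        import requests

        site = "http://localhost/splogs/mysite"  # assumed URL of the new site

        form = {
            "weblog_title": "My Site",           # -weblog_title
            "user_name": "admin",                # -user_name
            "admin_password": "changeme",        # -admin_password
            "admin_password2": "changeme",       # repeat-password field, if present
            "admin_email": "admin@example.com",  # -admin_email
            "blog_public": "1",                  # allow search engines to index
        }

        # Step 2 of the installer is the step that runs wp_install() with the
        # submitted values (field names are assumptions; verify against install.php).
        resp = requests.post(site + "/wp-admin/install.php?step=2", data=form, timeout=30)
        resp.raise_for_status()
        print("Installer responded with HTTP", resp.status_code)

    The BASH script could call such a helper right after the mysql step, but the form field names must match your WordPress release.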

    Read the article

  • Is it possible to use the Windows Speech Recognition Engine in a word pronunciation game?

    - by XBasic3000
    I am trying to create an application that uses the Windows speech recognition engine (SAPI). It is like a pronunciation game: it gives you a score when you pronounce a word correctly. When I started experimenting with SAPI, recognition was poor unless I loaded an XML grammar, which gives the best recognition results. But the problem now is that whatever sounds closest to the input text gets recognized. For example: Database - "dedebase" - correct. Even if you mispronounce the word, it gives you a correct answer. Without the XML grammar, when you say "database" it gives you "in the base" / "the base" / "data base" / etc. Please post your answers, suggestions and clarifications; votes for the best answer. Is it possible or not? By the way, I use the Delphi compiler for this project.

    Read the article

  • Rails CSV import, adding to a related table

    - by Jack
    Hi, I have a CSV importing system in my app (used locally only) which parses the CSV file line by line and adds the data to the database table. This is based on a tutorial here.

        require 'csv'

        def csv_import
          @parsed_file = CSV::Reader.parse(params[:dump][:file])
          n = 0
          @parsed_file.each_with_index do |row, i|
            next if i == 0 # ignore the first row
            course = Course.new
            course.title = row[0]
            course.unit_code = row[1]
            course.course_type = row[2]
            course.value = row[3]
            course.pass_mark = row[4]
            if course.save
              n = n + 1
              GC.start if n % 50 == 0
            end
            flash.now[:message] = "CSV Import Successful, #{n} new courses added to the database."
          end
          redirect_to(courses_url)
        end

    This is all in the courses controller and works fine. There is a relationship where courses HABTM years and years HABTM courses. In the CSV file (effectively in row[5] to row[8]) are the year_ids. Is there a way I can add these within the method above? I am confused as to how to loop over the 4 items and add them to the courses_years table. Thank you, Jack

    Read the article

  • Extending configuration for .Net 3.5 Applications

    - by Maximiliano Rios
    Due to a requirement in my current project, I have to build a configuration manager that merges local config information with configuration stored in a database. Custom configuration sections don't fit my needs; the problem is that I don't know the handler's type before loading certain information. For example, only after loading the database information will I know what my handler's type is, not before. I thought about writing my own handler, but I can't leave the type blank for sections; in fact, .NET requires the type in order to match my handler nodes. I'm thinking of building a different parser to read the XML nodes, but I would prefer to keep this structure. I've not found any information on how to do that yet. Is there any way? Can I extend or hook into the framework so it can load types on the fly and validate nodes? Thanks in advance.

    Read the article

  • Customized User Registration Form

    - by Nitz
    Hey guys, I have made a user-register.tpl.php file and added many text fields to it. Now I need to store the users' information in the database. Because I have created a customized registration page, the values of my text fields should be stored in the database. For example:

        Username: <input type="text" name="myuser" id="myuser" />

    Now I want to store the username that is entered in this "myuser" text field. NitishPanchjanya Corporation

    Read the article

  • How to "defragment" MongoDB index effectively in production?

    - by dfrankow
    I've been looking at MongoDB. Feels good. I added some indexes to a collection, uploaded a bunch of data, then removed all the data, and I noticed the indexes did not change size, similar to the behavior reported here. If I call db.repairDatabase(), the indexes are then squashed to near-zero. Similarly, if I don't remove all the data but call repairDatabase(), the indexes are squashed somewhat (perhaps because unused extents are truncated?). I am getting the index size from "totalIndexSize" of db.collection.stats(). However, repairDatabase() takes a long time (I've read it could be hours on a large database), and it's unclear to me how available the database is for reads or writes while it is running. I am guessing not very available. Since I want to run as few instances of mongod as possible, I want to understand more about how indexes are managed after deletes. Can anyone point me to anything or give any advice?
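    For reference, a minimal sketch (an editorial addition, not from the question) of reading the same "totalIndexSize" figure from a script with PyMongo and issuing the repairDatabase command the question mentions. The database and collection names are made up, and repairDatabase has been removed from recent MongoDB releases, so treat this as illustrative of the older servers the question is about.

        from pymongo import MongoClient

        client = MongoClient("localhost", 27017)
        db = client["mydb"]  # hypothetical database name

        def total_index_size(collection_name):
            # collStats is the server command behind db.collection.stats();
            # totalIndexSize is reported in bytes.
            return db.command("collstats", collection_name)["totalIndexSize"]

        print("before:", total_index_size("mycoll"))

        # repairDatabase rewrites the data files and blocks the database while
        # it runs, which is why it is normally done in a maintenance window.
        db.command("repairDatabase")

        print("after:", total_index_size("mycoll"))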

    Read the article

  • WAMP server not working? Or bad PHP code?

    - by lclaud
    I have this PHP code:

        <?php
        $username="root";
        $password="******";// censored out
        $database="bazadedate";

        mysql_connect("127.0.0.1",$username,$password); // i get unknown constant localhost if used instead of the loopback ip
        @mysql_select_db($database) or die( "Unable to select database");

        $query="SELECT * FROM backup";
        $result=mysql_query($query);
        $num=mysql_numrows($result);

        $i=0;
        $raspuns="";
        while ($i < $num) {
            $data=mysql_result($result,$i,"data");
            $suma=mysql_result($result,$i,"suma");
            $cv=mysql_result($result,$i,"cv");
            $det=mysql_result($result,$i,"detaliu");
            $raspuns = $raspuns."#".$data."#".$suma."#".$cv."#".$det."@";
            $i++;
        }
        echo "<b> $raspuns </b>";
        mysql_close();
        ?>

    It should return a single string containing all data from the table. But it says "connection reset when loading page". The log is:

        [Tue Jun 15 16:20:31 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:31 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:31 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:31 2010] [notice] Parent: Created child process 2336
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Child process is running
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Acquired the start mutex.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting 64 worker threads.
        [Tue Jun 15 16:20:31 2010] [notice] Child 2336: Starting thread to listen on port 80.
        [Tue Jun 15 16:20:35 2010] [notice] Parent: child process exited with status 255 -- Restarting.
        [Tue Jun 15 16:20:35 2010] [notice] Apache/2.2.11 (Win32) PHP/5.3.0 configured -- resuming normal operations
        [Tue Jun 15 16:20:35 2010] [notice] Server built: Dec 10 2008 00:10:06
        [Tue Jun 15 16:20:35 2010] [notice] Parent: Created child process 1928
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Child process is running
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Acquired the start mutex.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting 64 worker threads.
        [Tue Jun 15 16:20:35 2010] [notice] Child 1928: Starting thread to listen on port 80.

    Any idea why it outputs nothing?

    Read the article

  • What SQL is being sent from a SqlCommand object

    - by Justin808
    I have a SqlCommand object on my C#-based ASP.NET page. The SQL and the passed parameters work the majority of the time, but I have one case that is not working, where I get the following error:

        String or binary data would be truncated. The statement has been terminated.

    I understand the error, but all the columns in the database should be long enough to hold everything being sent. My question: is there a way to see the actual SQL being sent to the database from the SqlCommand object? I would like to be able to email the SQL when an error occurs. Thanks, Justin

    Read the article

  • What are the Pros & Cons of using SQL Azure for existing apps on dedicated servers

    - by Mark Redman
    We currently own our own servers and rent a rack in a datacentre. Looking at the pricing, scalability and SLAs for Azure SQL, I am thinking it might be viable to use Azure SQL for the database only, while continuing to run our existing applications on our own servers in a datacentre. This would let us stop worrying about the database and its infrastructure so we can concentrate on building an application server farm with disk storage for files etc. Our application is quite big, has various Windows services, and parts of it use unmanaged libraries that may not be feasible in the cloud, so we probably couldn't have everything in the Azure cloud.

    The pros: reduced total cost of ownership (no database servers, no SQL Server licenses).

    The cons: I guess there would be overhead in the transfer of data between the Azure cloud and our datacentre (i.e. the cloud may be in the US and the datacentre in the UK), but would that overhead be acceptable?

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //

        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • node.js with SQL Server Native Client 11 scope_identity not being returned

    - by binderbound
    I'm having trouble inserting a value into a database through node.js. Here is the offending code:

        sql.query(conn_str,
            "INSERT INTO Login(email, hash, salt, firstName, lastName) VALUES(?, ?, ?, ?, ?); SELECT SCOPE_IDENTITY() AS 'Identity';",
            [email, hash, salt, firstName, lastName],
            function(err, results){
                console.log(results)
            });

    Unfortunately, the console just echoes [], meaning results is an empty array, I suppose. Does anyone know why the identity is not being returned? Even if it were null, why isn't results then [{Identity: null }]? The database is on Azure, which does have a SCOPE_IDENTITY function, and the native client also recognises this function. I am using the node package "msnodesql". Please help.

    Read the article

  • How do I mock a custom field that is deleted so that south migrations run?

    - by muhuk
    I have removed an app that contained a couple of custom fields from my project. Now when I try to run my migrations I get an ImportError, naturally. These fields were very basic customizations like the one below:

        from django.db.models.fields import IntegerField

        class SomeField(IntegerField):

            def get_internal_type(self):
                return "SomeField"

            def db_type(self, connection=None):
                return 'integer'

            def clean(self, value):
                # some custom cleanup
                pass

    So none of them contain any database-level customizations. When I removed this code I created new migrations, so the subsequent migrations all ran fine. But when I tried to run them on a pre-deletion database, I realized my mistake. I can re-create a bare-bones app and make these imports work, but ideally I would like to know whether South has a mechanism to resolve these issues, or whether there are any best practices. It would be cool if I could solve this just by modifying my migrations and not touching the codebase. (Django 1.3, South 0.7.3)
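    As an editorial sketch of the "re-create a bare-bones app" idea mentioned above (an assumption about how it could look, not something from the question or from South's documentation): a stub module that redefines the deleted field with just enough behaviour for the frozen migrations to import it, so they can still run against a pre-deletion database. The module path is hypothetical and has to match whatever dotted path the old migrations reference.

        # legacy_fields/fields.py -- hypothetical module path; it must match the
        # import path frozen into the old South migrations.
        from django.db.models.fields import IntegerField

        class SomeField(IntegerField):
            """Stub for the deleted custom field.

            Keeps the integer column type so old migrations can import and
            apply it; the original custom clean() logic is intentionally gone.
            """

            def db_type(self, connection=None):
                return "integer"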

    Read the article

  • Any way to separate unit tests from integration tests in VS2008?

    - by AngryHacker
    I have a project full of tests, unit and integration alike. Integration tests require that a pretty large database be present, so it's difficult to make it a part of the build process simply because of the time that it takes to re-initialize the database. Is there a way to somehow separate unit tests from integration tests and have the build server just run the unit tests? I see that there is an Ordered Unit test in VS2008, which allows you to pick and choose tests, but I can't make it just execute alone, without all the others. Is there a trick that I am missing? Or perhaps I could adorn the unit tests with an attribute? What are some of the approaches people are using? P.S. I know I could use mocking for integration tests (just to make them go faster) but then it wouldn't be a true integration test.

    Read the article

  • Count query with 3 columns in SQL

    - by asher baig
    I have a database named Library with a table named Medien, which has multiple columns named Fname, Mname, Lname and ISBN. I want to count the database records with an ISBN and those without an ISBN. I have executed the following commands:

        Select COUNT(ISBN) as Verf1 FROM library.MEDIEN where verf1 = isbn;
        Select COUNT(ISBN) as Verf2 FROM library.MEDIEN where verf2 = isbn;
        Select COUNT(ISBN) as Verf3 FROM library.MEDIEN where verf3 = isbn;
        Select COUNT(ISBN) as Ntverf1 FROM library.MEDIEN where verf1 != isbn;
        Select COUNT(ISBN) as Ntverf2 FROM library.MEDIEN where verf2 != isbn;
        Select COUNT(ISBN) as Ntverf3 FROM library.MEDIEN where verf3 != isbn;

    I am not sure whether these commands are correct, because some ISBN records have only Fname and Mname, or Fname and Lname, or Mname and Lname, or Fname, Lname and Mname, respectively. Please kindly help me solve this query.

    Read the article

  • Java framework "suggestion" for persisting the results from an Oracle 9i stored procedure using Apac

    - by chocksaway
    Hello, I am developing a Java servlet which calls an Oracle stored procedure. The stored procedure is likely to "grow" over time, and I have concerns about the amount of time taken to "display the results on a web page". While I am at the implementation stage, I would like some suggestions for a persistence framework that will work on Apache Tomcat 5.5. I see two approaches to persisting the database results: a scheduled database query every N minutes, or something which utilises triggers. Hibernate seems like the obvious answer, but I have never called stored procedures from Hibernate (HQL and Criteria). Is there a more appropriate framework which can be used? Thank you. Cheers, Miles.

    Read the article

  • How to efficiently manage files on a filesystem in Java?

    - by Tuukka Mustonen
    I am creating a few JAX-WS endpoints, for which I want to save the received and sent messages for later inspection. To do this, I am planning to save the messages (XML files) to the filesystem, in some sensible hierarchy. There will be hundreds, even thousands of files per day. I also need to store metadata for each file. I am considering putting the metadata (just a couple of fields) into a database table, but keeping the XML content itself in files on the filesystem, in order not to bloat the database with content data that is seldom read. Is there some simple library that helps me with saving, loading, deleting etc. the files? It's not that tricky to implement myself, but I wonder if there are existing solutions? Just a simple library that already provides easy access to the filesystem (preferably across different operating systems). Or do I even need that? Should I just go with raw/custom Java?

    Read the article

  • Rails 3.0 MySQL connection problem

    - by palani
    Hi, I have installed RVM on my Ubuntu Linux box and configured a Rails 3 app in it. I am able to start the app server, but when I invoke http://localhost:3000 I get the following error:

        Mysql::Error (Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)):

    I checked that the mysqld service is running, and I checked my database.yml file, which is defined as follows:

        development:
          adapter: mysql
          encoding: utf8
          reconnect: false
          database: test_development
          username: root
          password: admin
          socket: /var/run/mysqld/mysqld.sock

    My installed mysql gem version is 2.8.1. I really don't know what the problem is here.

    Read the article

  • MKMap showing detail of annotations

    - by yeohchan
    I have encountered a problem populating the description for each annotation. Each annotation works, but there is somehow an error when I try to click on one. Here is the code; the line marked with ** is the one that has the problem.

        -(void)viewDidLoad{
            FlickrFetcher *fetcher=[FlickrFetcher sharedInstance];
            NSArray *rec=[fetcher recentGeoTaggedPhotos];
            for(NSDictionary *dic in rec){
                NSLog(@"%@",dic);
                NSDictionary *string = [fetcher locationForPhotoID:[dic objectForKey:@"id"]];
                double latitude = [[string objectForKey:@"latitude"] doubleValue];
                double longitude = [[string objectForKey:@"longitude"]doubleValue];
                if(latitude !=0 && latitude != 0 ){
                    CLLocationCoordinate2D coordinate1 = { latitude,longitude };
                    **NSDictionary *adress=[NSDictionary dictionaryWithObjectsAndKeys:[dic objectForKey:@"owner"],[dic objectForKey:@"title"],nil];**
                    MKPlacemark *anArea=[[MKPlacemark alloc]initWithCoordinate:coordinate1 addressDictionary:adress];
                    [mapView addAnnotation:anArea];
                }
            }
        }

    Here is what the FlickrFetcher class does:

        #import <Foundation/Foundation.h>

        #define TEST_HIGH_NETWORK_LATENCY 0

        typedef enum {
            FlickrFetcherPhotoFormatSquare,
            FlickrFetcherPhotoFormatLarge
        } FlickrFetcherPhotoFormat;

        @interface FlickrFetcher : NSObject {
            NSManagedObjectModel *managedObjectModel;
            NSManagedObjectContext *managedObjectContext;
            NSPersistentStoreCoordinator *persistentStoreCoordinator;
        }

        // Returns the 'singleton' instance of this class
        + (id)sharedInstance;

        //
        // Local Database Access
        //

        // Checks to see if any database exists on disk
        - (BOOL)databaseExists;

        // Returns the NSManagedObjectContext for inserting and fetching objects into the store
        - (NSManagedObjectContext *)managedObjectContext;

        // Returns an array of objects already in the database for the given Entity Name and Predicate
        - (NSArray *)fetchManagedObjectsForEntity:(NSString*)entityName withPredicate:(NSPredicate*)predicate;

        // Returns an NSFetchedResultsController for a given Entity Name and Predicate
        - (NSFetchedResultsController *)fetchedResultsControllerForEntity:(NSString*)entityName withPredicate:(NSPredicate*)predicate;

        //
        // Flickr API access
        // NOTE: these are blocking methods that wrap the Flickr API and wait on the results of a network request
        //

        // Returns an array of Flickr photo information for photos with the given tag
        - (NSArray *)photosForUser:(NSString *)username;

        // Returns an array of the most recent geo-tagged photos
        - (NSArray *)recentGeoTaggedPhotos;

        // Returns a dictionary of user info for a given user ID. individual photos contain a user ID keyed as "owner"
        - (NSString *)usernameForUserID:(NSString *)userID;

        // Returns the photo for a given server, id and secret
        - (NSData *)dataForPhotoID:(NSString *)photoID fromFarm:(NSString *)farm onServer:(NSString *)server withSecret:(NSString *)secret inFormat:(FlickrFetcherPhotoFormat)format;

        // Returns a dictionary containing the latitude and longitude where the photo was taken (among other information)
        - (NSDictionary *)locationForPhotoID:(NSString *)photoID;

        @end

    Read the article

  • Types in Python - Google Appengine

    - by Chris M
    Getting a bit peeved now; I have a model and a class that's just storing a GET request in the database; basic tracking.

        class SearchRec(db.Model):
            WebSite = db.StringProperty()  #required=True
            WebPage = db.StringProperty()
            CountryNM = db.StringProperty()
            PrefMailing = db.BooleanProperty()
            DateStamp = db.DateTimeProperty(auto_now_add=True)
            IP = db.StringProperty()

        class AddSearch(webapp.RequestHandler):
            def get(self):
                searchRec = SearchRec()
                searchRec.WebSite = self.request.get('WEBSITE')
                searchRec.WebPage = self.request.get('WEBPAGE')
                searchRec.CountryNM = self.request.get('COUNTRY')
                searchRec.PrefMailing = bool(self.request.get('MAIL'))
                searchRec.IP = self.request.get('IP')

    Bool has my biscuit; I thought that setting bool(self.reque....) would convert the type of the string, but no matter what I pass it, it still stores TRUE in the database. I had the same issue with using required=True on strings for the model; the damn thing kept saying that nothing was being passed... but it had. Ta
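    A short sketch (an editorial addition, not part of the question) of what is happening here: Python's bool() only tests whether a string is empty, so any non-empty request parameter, including "false" or "0", becomes True. The accepted literals in the helper below are an assumption about what the client might send.

        # bool() only checks for emptiness, so any non-empty value is truthy:
        bool("false")   # True
        bool("0")       # True
        bool("")        # False -- only an empty or missing parameter gives False

        def to_bool(value):
            """Interpret a query-string value as a boolean (assumed literals)."""
            return value.strip().lower() in ("1", "true", "on", "yes")

        # e.g. searchRec.PrefMailing = to_bool(self.request.get('MAIL'))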

    Read the article

  • VB working with SQL DB - end of row count, keeps looping

    - by Tramd
    I'm adding an ID and a name that I'm pulling from a database to a combo box. My problem is that for some reason my loop doesn't end once it reaches the end of the records in the database table. Here's my code:

        For intcount = 0 To dtOrders.Rows.Count - 1
            cmbSearch.Items.Add(dtOrders.Rows(intcount)("EmployeeID").ToString & " " & dtOrders.Rows(intcount)("EmployeeLastName").ToString & ", " & dtOrders.Rows(intcount)("EmployeeFirstName").ToString)
        Next

    Shouldn't the .Rows.Count - 1 stop it once it reaches the last record? It loops through 4 times.

    Read the article

  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as INSERT statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within an SSIS package? Here's what I'm trying to accomplish: I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQLCE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some datatypes and greater-than-4K columns (holdovers from SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?

    Read the article
