Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.


  • NHibernate and hbm2ddl update attribute

    - by michael lucas
    Hi, I'm using NHibernate with an SDF database. In my hibernate.cfg.xml file I've set: <property name="hbm2ddl.auto" value="update"/> But this does not seem to work at all. The "update" value should make NHibernate generate missing tables and columns during application launch, but it does not happen. If I want missing tables generated, I have to set the hbm2ddl.auto property to "create", which is not an option for me since it drops the existing db content beforehand. I experienced the same problem with PostgreSQL. Am I missing something?

    Read the article

  • Printing JTables without formatting of the original component

    - by EricR
    I'm writing an application that uses tables which can be printed if the user so desires, and I wish to print a JTable filled with data, except I haven't been able to find an option to remove the formatting; the printed table looks like it does in the GUI (based on the system theme), which makes the table less readable and uses excess ink. I wish to print the same data with plain formatting. Is there a way to do this straight from a JTable, or is my best option simply to print to a file and have the user print from there? Currently it functions through a viewer which gives the user some options for printing, and then it goes to the system's printer.

    Read the article

  • Coldfusion returning typed objects / AMF remoting

    - by Chin
    Is the same possible in ColdFusion? Currently I am using .NET/Fluorine to return objects to the client. While testing, I like to pass strings representing the select statement and the custom object I wish to have returned from my service. Fluorine has a class, ASObject, on which you can set the 'TypeName' property, which works great. I am hoping that this is possible in ColdFusion. Does anyone know whether you can set the type of the returned object in a similar way? This is especially helpful with large collections, as the Flash player will convert them to a local object of the same name, thus saving iterating over the collection to convert the objects to a particular custom object.

        foreach (DataRow row in ds.Tables[0].Rows) {
            ASObject obj = new ASObject();
            foreach (DataColumn col in ds.Tables[0].Columns) {
                obj.Add(col.ColumnName, row[col.ColumnName]);
            }
            obj.TypeName = pObjType;
            al.Add(obj);
        }

    Many thanks,

    Read the article

  • Little Employee/Shift timetable HELP!!!

    - by DAVID
    Morning guys, I have the following tables: operator(ope_id, ope_name), ope_shift(ope_id, shift_id, shift_date), shift(shift_id, shift_start, shift_end). Here is a better view of the data: http://latinunit.net/emp_shift.txt and here is a screenshot of a select statement against the tables: http://img256.imageshack.us/img256/4013/opeshift.jpg I'm using this code: SELECT OPE_ID, COUNT(OPE_ID) AS Total_shifts FROM operator_shift GROUP BY ope_id; to view the current total shifts per operator, and it works, BUT if there were 500 more rows it would count them all as well. THE QUESTION is: does anyone have a better way of structuring my database, or how can I tell the system that those rows are a whole month? I remember a friend said something about counting and then dividing by 30, but I'm not sure; what if the month isn't finished and you want to show the employee with the most shifts to date?
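
    For illustration, a minimal sketch of counting only one month's shifts by filtering on the shift date instead of dividing by 30; it assumes the shift table queried above (operator_shift) also carries the shift_date column listed in the schema, and the date-literal syntax varies by engine:

        SELECT ope_id,
               COUNT(*) AS total_shifts
        FROM operator_shift
        WHERE shift_date >= DATE '2010-03-01'   -- first day of the month of interest
          AND shift_date <  DATE '2010-04-01'   -- first day of the next month
        GROUP BY ope_id
        ORDER BY total_shifts DESC;

    Ordering by the count puts the operator with the most shifts so far in that month on top, even if the month isn't finished.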

    Read the article

  • Automatic database generation / migration with perl

    - by pistacchio
    Hi, in RoR or Django or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations, and in the case of RoR and web2py it also keeps them up to date (e.g. removing a class drops the table, adding a property to the class triggers an "ALTER TABLE ... ADD", etc.). Is there any Perl module that does the same? E.g., one that takes a YAML / XML / JSON description of a database as input and modifies / generates the database accordingly? Thanks in advance.

    Read the article

  • postfix: maildir access problem; did I lose mail?

    - by threecheeseopera
    I found a few hundred of the following errors in my syslog following some configuration changes: warning: maildir access problem for UID/GID=5000/5000: create maildir file /home/vmail/domain/account1/tmp/1273028517.P14666.domain.com: Permission denied I made a temporary fix for now (the ol' standby, chmod 777), but I would like to know if this is a permanent error (lost mail) or if these deliveries are retried/requeued. Thanks!

    Read the article

  • What is a performant way of 'tree-walking' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph", so there can be a long chain of relationships between objects in those few tables via parent/child relationships. What is a performant way of tree-walking through my Entity Framework data? That is, I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database just for the purpose of walking the tree, where the end result may only be identifying leaf nodes? Or would this be OK with the way lazy loading works at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then have any attributes lazy-load when I need to refer to them?
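
    For illustration, one option (not from the original post) is to push the walk down to the database with a recursive query instead of materialising the whole graph through the ORM. A minimal sketch in SQL Server syntax; the Relationships table, its ParentId/ChildId columns and the @RootId parameter are hypothetical stand-ins for the actual schema:

        -- Walk the graph downwards from a root and return only the leaf nodes
        WITH Walk (NodeId) AS (
            SELECT r.ChildId
            FROM   Relationships r
            WHERE  r.ParentId = @RootId          -- hypothetical root parameter
            UNION ALL
            SELECT r.ChildId
            FROM   Relationships r
            JOIN   Walk w ON r.ParentId = w.NodeId
        )
        SELECT w.NodeId
        FROM   Walk w
        WHERE  NOT EXISTS (                      -- a leaf has no children
            SELECT 1 FROM Relationships r WHERE r.ParentId = w.NodeId);

    The resulting set of leaf ids could then be used to load just those entities, rather than walking lazy-loaded navigation properties node by node.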

    Read the article

  • How to sum calculated fields

    - by Nazero Jerry
    I'd like to ask a question here that I think would be easy for some people. OK, I have a query that returns records from two related tables (one to many). In this query I have about 3 to 4 calculated fields that are based on the fields from the 2 tables. Now I want to have a GROUP BY clause for names and a SUM to total the calculated fields, but it ends up in an error message saying: "You tried to execute a query that is not part of an aggregate function." So I decided to just run the query without the totals (i.e. no GROUP BY, SUM, etc.), and then I created another query that totals my previous query (i.e. using a GROUP BY clause for names and SUM for the calculated fields; no calculation here). This is fine (I have done this before) but I don't like having two queries just to get a summary total. Is there any other way of doing this in the design view and creating only one query? I would very much appreciate it. Thank you, JM
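
    For illustration, a minimal sketch of collapsing this into one query by nesting the row-level calculation in a derived table and summing over it; the table, name and amount columns here are hypothetical stand-ins, and whether Access accepts this exact derived-table syntax depends on the version (a saved query can always stand in for the inner SELECT):

        SELECT q.person_name,
               SUM(q.calc_total) AS sum_calc_total
        FROM (
            SELECT p.person_name,
                   (d.qty * d.unit_price) AS calc_total   -- example calculated field
            FROM persons AS p
            INNER JOIN details AS d ON d.person_id = p.person_id
        ) AS q
        GROUP BY q.person_name;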

    Read the article

  • How to implement Auto_Increment per User, on the same table?

    - by Jonas
    I would like to have multiple users that share the same tables in the database, but have one auto_increment value per user. I will use an embedded database, JavaDB, and as far as I know it doesn't support this functionality. How can I implement it? Should I implement a trigger on inserts that looks up the user's last inserted row and then adds one, or is there a better alternative? Or is it better to implement this in the application code? Or is this just a bad idea? I think this is easier to maintain than creating new tables for every user. Example:

        table
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  1 |           1 |       2 | 3454 |
        |  2 |           2 |       2 | 6567 |
        |  3 |           1 |       3 | 6788 |
        |  4 |           3 |       2 | 1133 |
        |  5 |           4 |       2 | 4534 |
        |  6 |           2 |       3 | 4366 |
        |  7 |           3 |       3 | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 3
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  3 |           1 |       3 | 6788 |
        |  6 |           2 |       3 | 4366 |
        |  7 |           3 |       3 | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 2
        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        |  1 |           1 |       2 | 3454 |
        |  2 |           2 |       2 | 6567 |
        |  4 |           3 |       2 | 1133 |
        |  5 |           4 |       2 | 4534 |
        +----+-------------+---------+------+
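
    For illustration, a minimal sketch of computing the per-user counter at insert time rather than with a trigger; generic SQL, with the table renamed user_data here because "table" itself is a reserved word, and with no claim about how safe this is under concurrent inserts in JavaDB/Derby:

        -- ID stays a normal auto-increment/identity column;
        -- ID_PER_USER is derived from the user's current maximum at insert time
        INSERT INTO user_data (id_per_user, user_id, data)
        SELECT COALESCE(MAX(id_per_user), 0) + 1, 3, 6788
        FROM user_data
        WHERE user_id = 3;

    Under concurrency, two sessions could compute the same value, so this would need either serialized inserts per user or a retry on a unique (user_id, id_per_user) constraint.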

    Read the article

  • How to retrieve MySQL records as INSERT statements

    - by Aglystas
    I'm trying to come up with the best method of synchronizing particular rows of 2 different database tables. So, for example, there are 2 product tables in different databases, as such... Origin database: product{ merchant_id, product_id, ... additional fields } Destination database: product{ merchant_id, product_id, ... additional fields } So the database schema is the same for both. However, I'm looking to select records with a particular merchant_id, remove all records from the destination table that have that merchant_id, and replace those records with records from the origin database with the same merchant_id. My first thought was using mysqldump, parsing out the CREATE TABLE statements, and only running the INSERT statements. Seems like a pain, though. So I was wondering if there is a better technique to do this. I would think MySQL has some method of creating INSERT statements as output from a SELECT statement, so you can define how to insert specific record information into a new db. Any help would be appreciated, thank you much.
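
    For illustration, if both databases happen to live on the same MySQL server, a minimal sketch that skips dump files entirely (database names and the merchant_id value are stand-ins):

        -- Replace one merchant's rows in the destination with the origin's rows;
        -- SELECT * is safe here only because both schemas are stated to be identical
        DELETE FROM destination_db.product WHERE merchant_id = 42;
        INSERT INTO destination_db.product
            SELECT * FROM origin_db.product WHERE merchant_id = 42;

    If the databases are on different servers, mysqldump can emit just the INSERT statements for the matching rows (its --no-create-info and --where options point in that direction), though the exact invocation is worth verifying against your mysqldump version.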

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway This is what is currently installed on my server: Here's the configurations I've created/updated so far, can some one take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in /var/log/php-fpm/error.log file. sidenote: both the nginx and php-fpm has been configured to run under a local account called www-data and the following folders exits on the server nginx.conf global nginx configuration user www-data; worker_processes 6; worker_rlimit_nofile 100000; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; events { worker_connections 2048; use epoll; multi_accept on; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # cache informations about FDs, frequently accessed files can boost performance open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # to boost IO on HDD we can disable access logs access_log off; # copies data between one FD and other from within the kernel # faster then read() + write() sendfile on; # send headers in one peace, its better then sending them one by one tcp_nopush on; # don't buffer data sent, good for small data bursts in real time tcp_nodelay on; # server will close connection after this time keepalive_timeout 60; # number of requests client can make over keep-alive -- for testing keepalive_requests 100000; # allow the server to close connection on non responding client, this will free up memory reset_timedout_connection on; # request timed out -- default 60 client_body_timeout 60; # if client stop responding, free up memory -- default 60 send_timeout 60; # reduce the data that needs to be sent over network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; # Load vHosts include /etc/nginx/conf.d/*.conf; } conf.d/www.domain.com.conf my vhost entry ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. 
{ return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } /etc/php-fpm.d/www-data.conf my php-fpm pool config ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. { return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).

    Read the article

  • How do I correctly use two Not Exists statements in a where clause using Access SQL VBA?

    - by Bryan
    I have 3 tables: NotHeard, analyzed, and analyzed2. In each of these tables I have two columns named UnitID and Address. What I'm trying to do right now is to select all of the records for the columns UnitID and Address from NotHeard that don't appear in either analyzed or analyzed2. The SQL statement I created was as follows: SELECT UnitID, Address INTO [NotHeardByEither] FROM [NotHeard] WHERE NOT EXISTS( SELECT analyzed.UnitID FROM analyzed WHERE [NotHeard].UnitID = analyzed.UnitID) OR NOT EXISTS( SELECT analyzed2.UnitID FROM analyzed2 WHERE [NotHeard].UnitID = analyzed2.UnitID) GROUP BY UnitID, Address I thought this would work since I've used a single NOT EXISTS subquery like this before and it has worked just fine for me in the past. The above query, however, returns the same data that is in the NotHeard table, whereas if I take out the OR NOT EXISTS part it works correctly. Any ideas as to what I'm doing wrong or how to do what I'm wanting to do?
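
    For illustration, the OR is the likely culprit: with OR, a row is kept whenever it is missing from either one of the two tables, which can be almost every row; requiring both conditions with AND keeps only rows absent from both. A minimal sketch of the same statement:

        SELECT UnitID, Address INTO [NotHeardByEither]
        FROM [NotHeard]
        WHERE NOT EXISTS (
                SELECT analyzed.UnitID FROM analyzed
                WHERE [NotHeard].UnitID = analyzed.UnitID)
          AND NOT EXISTS (
                SELECT analyzed2.UnitID FROM analyzed2
                WHERE [NotHeard].UnitID = analyzed2.UnitID)
        GROUP BY UnitID, Address;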

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is: SELECT table1.term, table2.keyword FROM table1 INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%'); This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes: as for server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8G of RAM, fine-tuned for the MySQL workload.
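
    For illustration only, one commonly suggested restructuring (not from the original post) is to avoid the non-indexable LIKE '%...%' join altogether: if the keywords match whole words, split each term into its words once, index the words, and join on equality. A minimal sketch; table1 is assumed to have an integer id column, and the word-splitting itself would be done by application code:

        -- One row per (term, word), populated by code that splits table1.term into words
        CREATE TABLE term_word (
            term_id INT NOT NULL,
            word    VARCHAR(64) NOT NULL,
            KEY idx_word (word)
        ) ENGINE=MyISAM;

        -- Indexed equality join instead of a full scan with LIKE per keyword
        SELECT t1.term, t2.keyword
        FROM table2 AS t2
        JOIN term_word AS tw ON tw.word = t2.keyword
        JOIN table1 AS t1 ON t1.id = tw.term_id;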

    Read the article

  • In SQL Server merge replication, how does reinitializing work?

    - by Craig Shearer
    I have set up a pull subscription to a merge publication in SQL Server. I use parameterized row filters on some tables. This works fine with the initial synchronization: just the rows matching the filter arrive in the replicated (client) database. However, at some later point I'd like to be able to synchronize the replicated database again from the server and have new rows that match the parameterized row filters appear in the client database. The documentation seems to indicate that I can call Reinitialize() to do this. However, when I try this and Synchronize again, I get an error saying that the script 'snapshot.pre' cannot be applied to the database. I've inspected the script and can see why: it's trying to drop some functions that are used by the tables in the database. It would appear that for Reinitialize() to work, the database must be blank. Am I misunderstanding something here? Is there a way to make this work?

    Read the article

  • Entity Framework and database logic

    - by Xavier Devian
    Hi all, I have a question that's been around for several years. As you all know, Entity Framework is an ORM tool that tries to map the database to an object-oriented access model. All the samples I've seen query the database tables directly. So, what is the role of views in the database now? Views were used to model the database in a more friendly way, that is, several physical tables, one logical table. This was great, for example, for hiding the complex relational model inside stored procedures, as querying the views inside them was much easier than reproducing the query joins over and over in each stored procedure. So the question is, why is Entity Framework so good if stored procedures cannot take advantage of it?

    Read the article

  • To join or not to join - database structure

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to produce a listing of the data is increasing drastically as the many-to-many relationships increase. The questions: Is it really that bad to merge tables where needed and thereby reduce joins? Is there a better way than pivot tables to take care of many-to-many relationships (see the sketch below)? We discussed instead storing all data in serialized text columns and having the application do the sorting instead of the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!
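
    For context, a minimal sketch of the pivot (junction) table pattern under discussion, with hypothetical product/tag tables; each many-to-many relationship costs one junction table and one extra join in the listing query:

        CREATE TABLE product (id INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE tag     (id INT PRIMARY KEY, name VARCHAR(100));
        CREATE TABLE product_tag (              -- the pivot table
            product_id INT NOT NULL,
            tag_id     INT NOT NULL,
            PRIMARY KEY (product_id, tag_id)
        );

        -- Listing products with their tags takes two joins per relationship
        SELECT p.name, t.name
        FROM product p
        JOIN product_tag pt ON pt.product_id = p.id
        JOIN tag t ON t.id = pt.tag_id;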

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL() and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Has anybody run into anything like this before? These commands seem to run fine in command-line sqlite3. Are there gotchas that I'm missing (e.g. INSERTs must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further.

        @Override
        public void onCreate(SQLiteDatabase db) {
            db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" +
                " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," +
                " level TEXT NOT NULL,"+
                " rows INTEGER NOT NULL,"+
                " cols INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " row INTEGER NOT NULL,"+
                " col INTEGER NOT NULL,"+
                " type INTEGER NOT NULL);");
            db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" +
                " level_id INTEGER NOT NULL," +
                " score INTEGER NOT NULL,"+
                " date_achieved DATE NOT NULL,"+
                " name TEXT NOT NULL);");
            this.enterFirstLevel(db);
        }

    And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...):

        private void insertDynamic(SQLiteDatabase db, int row, int col, int type) {
            ContentValues values = new ContentValues();
            values.put("level_id", "1");
            values.put("row", Integer.toString(row));
            values.put("col", Integer.toString(col));
            values.put("type", Integer.toString(type));
            db.insertOrThrow(DYNAMICS_TABLE, "col", values);
        }

    Finally, the query code looks like this:

        private Cursor fetchLevelDynamics(int id) {
            SQLiteDatabase db = this.leveldata.getReadableDatabase();
            try {
                String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE;
                String[] queryArgs = new String[0];
                Cursor cursor = db.rawQuery(fetchQuery, queryArgs);
                Activity activity = (Activity) this.context;
                activity.startManagingCursor(cursor);
                return cursor;
            } finally {
                db.close();
            }
        }

    Read the article

  • Windows 64bit Sandboxing software alternatives

    - by Pacifika
    As you might know sandboxing software doesn't work in 64bit Windows due to patchguard. What are the alternatives for a person looking to test untrusted / temporary software? Edit: @Nick I'd prefer an alternative to VMs as I'm not happy with the extended startup time, the extra login sequences and the memory overhead that accompanies booting a VM solution to test something out ocassionally as a home user. Also it's another system that needs to be kept secure and up to date.

    Read the article

  • Create a view like Tweetie's User Profile view

    - by Graeme
    Hi, I'm just wondering if anyone has any idea how to create a view that looks like the user profile view in apps like Tweetie, where there are seemingly multiple tables, with a couple of normal (straight up-and-down) tables and then two rows of six cells, which in Tweetie's case show the number of followers, following, etc. I'm trying to make a similar view for my app but can't seem to find out the best way to create it. Any tutorials, advice, etc. would be appreciated. Thanks. P.S. Here's a picture of the view which I'm trying to recreate.

    Read the article

  • multithreading with database

    - by Darsin
    I am looking for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading, so I will outline my scenario first. This synchronous operation is currently done for one set of data (portfolio) based on the parameters provided. The (pseudo-code) implementation is given below:

        public DataSet DoTests(int fundId, DateTime portfolioDate)
        {
            // Get test results for the portfolio
            // Call the database adapter method, which in turn is a stored procedure,
            // which in turn runs a series of "rule" stored procs and fills a local temp table and returns it back.
            DataSet resultsDataSet = GetTestResults(fundId, portfolioDate);
            try
            {
                // Do some local processing on the results
                DoSomeProcessing(resultsDataSet);

                // Save the results in Test, TestResults and TestAllocations tables in a transaction.
                // Sets a global transaction which is provided to all the adapter methods called below.
                // It is defined in the Base class.
                StartTransaction("TestTransaction");

                // Save Test and get a testId
                int testId = UpdateTest(resultsDataSet);   // Adapter method, uses the same transaction

                // Update testId in the other tables in the dataset
                UpdateTestId(resultsDataSet, testId);

                // Update TestResults
                UpdateTestResults(resultsDataSet);         // Adapter method, uses the same transaction

                // Update TestAllocations
                UpdateTestAllocations(resultsDataSet);     // Adapter method, uses the same transaction

                // It is defined in the base class
                CommitTransaction("TestTransaction");
            }
            catch
            {
                RollbackTransaction("TestTransaction");
            }
            return resultsDataSet;
        }

    Now the requirement is to do it for multiple sets of data. One way would be to call the above DoTests() method in a loop and get the data, but I would prefer doing it in parallel. There are certain catches: the StartTransaction() method creates a connection (and transaction) every time it is called, and all the underlying database tables and procedures are the same for each call of DoTests() (obviously). Thus my questions are: Will using multithreading improve performance at all? What are the chances of deadlock, especially when new TestIds are being created and the Tests, TestResults and TestAllocations are being saved, and how can those deadlocks be handled? Is there any other, more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?

    Read the article

  • Write vim file as super-user?

    - by zimbatm
    This is a usability problem that happens often to me: I open a read-only system file with vim and even edit it, because I'm not attentive enough or because the vim on the system is badly configured. Once my changes are done, I'm stuck having to write them to a temporary file or losing them, because :w! won't work. Is there a vim command (:W!!!) that allows you to write the current buffer as a super-user? (Vim would ask for your sudo or su password, naturally.)

    Read the article

  • Display another field in the referenced table for multiple columns with performance issues in mind

    - by israkir
    I have an edge table like this:

        -------------------------------
        | id | arg1 | relation | arg2 |
        -------------------------------
        |  1 |    1 |        3 |    4 |
        |  2 |    2 |        6 |    5 |
        -------------------------------

    where arg1, relation and arg2 reference the ids of objects in another object table:

        --------------------
        | id | object_name |
        --------------------
        |  1 | book        |
        |  2 | pen         |
        |  3 | on          |
        |  4 | table       |
        |  5 | bag         |
        |  6 | in          |
        --------------------

    What I want to do, keeping performance in mind (it is a very big table, more than 50 million entries), is display the object_name for each edge entry rather than the id, like this:

        ---------------------------
        | arg1 | relation | arg2  |
        ---------------------------
        | book | on       | table |
        | pen  | in       | bag   |
        ---------------------------

    What is the best SELECT query to do this? I am also open to suggestions for optimizing the query, such as adding more indexes on the tables, etc... EDIT, based on the comments below: 1) @Craig Ringer: the PostgreSQL version is 8.4.13 and the only index is id, for both tables. 2) @andrefsp: edge is almost 2x bigger than object.
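
    For illustration, a minimal sketch of the usual way to resolve all three columns: join the object table once per column, each time under its own alias (table and column names as shown above):

        SELECT a1.object_name AS arg1,
               r.object_name  AS relation,
               a2.object_name AS arg2
        FROM edge e
        JOIN object a1 ON a1.id = e.arg1
        JOIN object r  ON r.id  = e.relation
        JOIN object a2 ON a2.id = e.arg2;

    For a table this size, indexes on edge.arg1, edge.relation and edge.arg2 (object.id presumably already being the indexed primary key) are the usual first thing to check.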

    Read the article

  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that makes use of SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error, and my error.log records "malformed header from script. Bad header=FROM tags : index.py", where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.

    Read the article

  • Where are the really high quality and complex Swing components?

    - by jouhni
    Looking at Swing, I have the feeling that it comes with many useful and reasonable atomic components in its core. And when I look at the web there are hundreds of quickly plugged-together components (among them many date/time pickers and pimped lists and tables), which have in common that I could easily write them on my own if I needed them. When I build big software and come to the point where I need a domain-specific component which is really big, I mostly end up having to write it on my own, which, since such components are not just plugged-together lists and tables, isn't done quickly. So the question is, why are there no Swing component galleries which contain more than just customized date/time pickers or lists with added tree support? Where are the components which really raise the level of abstraction, or are in the best case domain-specific?

    Read the article
