Search Results

Search found 21434 results on 858 pages for 'query master'.

Page 502/858 | < Previous Page | 498 499 500 501 502 503 504 505 506 507 508 509  | Next Page >

  • Creating a Linq->HQL provider

    - by Mike Q
    Hi all, I have a client application that connects to a server. The server uses Hibernate for persistence and querying, so it has a set of annotated Hibernate objects. The client sends HQL queries to the server and gets responses back. The client has an auto-generated set of objects that match the server-side Hibernate objects for query results and basic persistence.

    I would like to support querying with LINQ as well as HQL, as it makes the queries typesafe and quicker to build (no more typos in HQL string queries). I've looked at the following, but I can't see how to make them fit with what I have:

    - NHibernate's LINQ provider - requires NHibernate's ISession and ISessionFactory, which I don't have.
    - LinqExtender - requires a lot of annotations on the objects and extending a base type; too invasive.

    What I really want is something that will give me an easy-to-process structure to build the HQL queries from. I've read most of a 15-page article written by one of the C# developers on how to create custom providers, and it's pretty fraught, mainly because of the complexity of the expression tree.

    Can anyone suggest an approach for implementing LINQ-to-HQL translation? Perhaps a library that will do the cleanup of the expression tree into something more SQL/HQL-ish. I would like to support select/from/where/group by/order by/joins. I'm not too worried about subqueries.

    Read the article

  • Getting random record from database with group by

    - by Saif Bechan
    Hello, I have a question about picking random entries from a database. I have four tables: products, bids, autobids, and users.

        products
        --------
        id (primary key): 20, 21, 22, 23, 24
        price
        ...

        users
        --------
        id (primary key)
        name: user1, user2, user3, ...

        bids
        --------
        product_id
        user_id
        created

        autobids
        --------
        user_id
        product_id

    Multiple users can have an autobid on a product. For the next bidder I want to select a random user from the autobids table. In plain language: for each product in the autobids table I want a random user who is not the last bidder. For example, if user1, user2 and user3 have an autobid on product 20, and the same three have an autobid on product 21, then I want a result set that looks something like this:

        20 - user2
        21 - user3

    Just a random user. I tried mixing GROUP BY (product_id) with RAND(), but I just can't get the right values out of it. Right now I am getting a random user, but all the values that go with it don't match. Can someone please help me construct this query? I am using PHP and MySQL.
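    A hedged sketch of one way to express this in MySQL: a correlated subquery per product picks one random autobidder while excluding that product's most recent bidder. Table and column names follow the question; untested.

        SELECT a.product_id,
               (SELECT a2.user_id
                  FROM autobids a2
                 WHERE a2.product_id = a.product_id
                   AND a2.user_id <> (SELECT b.user_id          -- the last bidder
                                        FROM bids b
                                       WHERE b.product_id = a.product_id
                                       ORDER BY b.created DESC
                                       LIMIT 1)
                 ORDER BY RAND()
                 LIMIT 1) AS next_bidder
          FROM autobids a
         GROUP BY a.product_id;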

    Read the article

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ to SQL code, which is generating around 100 UPDATE statements. I'll keep playing around with the best solution, but I would appreciate some ideas/past experience with this issue. Here's my code:

        List<FooBar> foos;
        int userId = 123;

        using (var db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that the .ToList() fires off a query to get all the FooBars for a particular user, and then for each Foo object it executes an UPDATE statement updating the IsFoo property. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is the following:

        UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123

    EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It still requires some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.
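    For reference, a minimal sketch of the ExecuteCommand route the edit mentions (names follow the question): DataContext.ExecuteCommand sends one parameterized statement instead of one UPDATE per row.

        using (var db = new FooDatabase())
        {
            // One round trip: LINQ to SQL substitutes {0} as a SQL parameter.
            int rows = db.ExecuteCommand(
                "UPDATE FooBars SET IsFoo = 0 WHERE UserId = {0}", userId);
        }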

    Read the article

  • PHP & MySQL Image deletion problem?

    - by IMAGE
    I have a script that deletes a user's avatar image, which is stored in a folder named thumbs and another named images; the image name is stored in a MySQL database. But for some reason the script deletes all of the user's info. For example, if the user's id is 3, all of that user's info (first name, last name, age and so on) is deleted as well - basically everything is deleted, including the user. How do I fix this so that only the images and the image name are deleted? Here is the code:

        $user_id = '3';

        if (isset($_POST['delete_image'])) {
            $a = "SELECT * FROM users WHERE avatar = '" . $avatar . "' AND user_id = '" . $user_id . "'";
            $r = mysqli_query($mysqli, $a)
                or trigger_error("Query: $a\n<br />MySQL Error: " . mysqli_error($mysqli));

            if ($r == TRUE) {
                unlink("../members/" . $user_id . "/images/" . $avatar);
                unlink("../members/" . $user_id . "/images/thumbs/" . $avatar);

                $a = "DELETE FROM users WHERE avatar = '" . $avatar . "' AND user_id = '" . $user_id . "'";
                $r = mysqli_query($mysqli, $a)
                    or trigger_error("Query: $a\n<br />MySQL Error: " . mysqli_error($mysqli));
            }
        }
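    One hedged reading of the symptom: DELETE FROM users removes the entire row, so clearing just the avatar column (after unlinking the files) would be an UPDATE instead. A minimal sketch of that statement, assuming the column should simply be emptied; the avatar value shown is a placeholder for $avatar:

        UPDATE users
           SET avatar = ''
         WHERE avatar = 'old-avatar.jpg'
           AND user_id = '3';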

    Read the article

  • Looking for internet traffic manager software for Windows 7

    - by Semyon Perepelitsa
    I have a 128 KB/s (1 Mbit/s) internet connection. It's not very fast for downloading big files, but it is fine for web browsing. When I start a big file download, I cannot surf the web normally: pages appear slowly, especially the pictures on them. So I pause the download, but I regularly spend a lot of time reading articles, or leave the computer alone, when I could allow the file to keep downloading. Is there any software that can automate this process of pausing and resuming downloads, or simply regulate the internet speed for each application automatically? I download files using different software and protocols (Download Master for HTTP and FTP, uTorrent for torrents, and many other programs that update themselves), so I don't want to be tied to one particular download program.

    Read the article

  • php paging and the use of limit clause

    - by Average Joe
    Imagine you've got a 1M-record table and you want to limit the search results down to, say, 10,000 and no more than that. So what do I use for that? Well, the answer is the LIMIT clause. Example:

        select recid from mytable order by recid asc limit 10000

    This is going to give me the last 10,000 records entered into this table. So far, no paging. But the LIMIT phrase is already in use, which brings the question to the next level: what if I want to page through this particular record set 100 records at a time?

    Since the LIMIT phrase is already part of the original query, how do I use it again, this time to take care of the paging? If the original query did not have a LIMIT clause to begin with, I'd set it to LIMIT 0,100 and then adjust it to LIMIT 100,100, then LIMIT 200,100 and so on as the paging takes its course. But at this time, I cannot. You'd almost think you want to use two LIMIT phrases one after the other, which is not going to work:

        limit 10000 limit 0,100

    For sure it would error out. So what's the solution in this particular case?
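    A hedged sketch of one common way out: wrap the capped query in a derived table and apply the paging LIMIT on the outside (MySQL syntax; untested).

        SELECT t.recid
          FROM (SELECT recid
                  FROM mytable
                 ORDER BY recid ASC
                 LIMIT 10000) AS t          -- the fixed 10,000-row window
         ORDER BY t.recid ASC
         LIMIT 200, 100;                    -- third page: rows 201-300 of that window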

    Read the article

  • Hard Drive upgrade advice for a Dell PowerEdge 1950

    - by user8185
    My setup is a Dell PowerEdge 1950 with 2x 140 GB hard drives in a RAID 1 configuration. The OS is Windows 2003 Web Edition, and the disk is partitioned in two: a 12 GB C: partition, with the remainder as the D: drive. Both are very close to full capacity. Ideally I want to replace those drives with 2x 1 TB drives while retaining all data. First of all, is this possible without rebuilding the server? If so, will I need any third-party software (Symantec Ghost or Partition Master, for example) to do this? Any general advice on how to go about doing this?

    Read the article

  • sql server mdf file database attachment

    - by jnsohnumr
    Hello all. I'm having a bear of a time getting Visual Studio 2010 (Ultimate, I think) to properly attach to my database. It was moved from its original spot to #MYAPP#/#MYAPP#.Web/App_Data/#MDF_FILE#.mdf. I have three instances of SQL Server running on this machine. I have tried to replace the old mdf file with my new one and cannot get the connection string right for it. What I really want to do is just open some DB instance and run a DB create script, so that I have a DB that was generated via my edmx (generate database from model) in a Silverlight Business Application (C#).

    Right now, when I go to Server Explorer in VS, choose Add New Connection, choose Microsoft SQL Server Database File (SqlClient), choose my file location (the App_Data directory), use Windows authentication, and hit the Test Connection button, I get the following error:

        Unable to open the physical file "". Operating system error 5: "5(Access Denied.)".
        An attempt to attach to an auto-named database for file "" failed. A database with the
        same name exists, or specified file cannot be opened, or it is located on UNC share.

    The mdf file was created on the same machine by connecting to (local) in SQL Server Management Studio, opening a new query, pasting in the SQL from the generated DDL file, adding

        CREATE DATABASE [NcrCarDatabase];
        GO

    before the pasted SQL, and executing the query. I then disconnected from the DB in Management Studio, closed Management Studio, navigated to the DATA directory for that instance, and copied the mdf and ldf files to my application's App_Data folder. I am now trying to connect to that same file inside Visual Studio. I hope that gives more clarity to my problem :). The connection string is:

        Data Source=.\SQLEXPRESS;AttachDbFilename=C:\SourceCode\NcrCarDatabase\NcrCarDatabase.Web\App_Data\NcrCarDatabase.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True

    Read the article

  • PDO using singleton stored as class property

    - by Misiur
    Hi again. OOP drives me crazy. I can't get PDO to work. Here's my DB class:

        class DB extends PDO
        {
            public function &instance($dsn, $username = null, $password = null, $driver_options = array())
            {
                static $instance = null;
                if ($instance === null) {
                    try {
                        $instance = new self($dsn, $username, $password, $driver_options);
                    } catch (PDOException $e) {
                        throw new DBException($e->getMessage());
                    }
                }
                return $instance;
            }
        }

    It's okay when I do something like this:

        try {
            $db = new DB(DB_TYPE.':host='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
        } catch (DBException $e) {
            echo $e->getMessage();
        }

    But this:

        try {
            $db = DB::instance(DB_TYPE.':host='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
        } catch (DBException $e) {
            echo $e->getMessage();
        }

    does nothing. I mean, even when I use the wrong password/username, I don't get any exception. Second thing: I have a class which is the "heart" of my site:

        class Core
        {
            static private $instance;
            public $db;

            public function __construct()
            {
                if (!self::$instance) {
                    $this->db = DB::instance(DB_TYPE.':hpost='.DB_HOST.';dbname='.DB_NAME, DB_USER, DB_PASS);
                }
                return self::$instance;
            }

            private function __clone() { }
        }

    I've tried using "new DB" inside the class, but this:

        $r = $core->db->query("SELECT * FROM me_config");
        print_r($r->fetch());

    returns nothing. With

        $sql = "SELECT * FROM me_config";
        print_r($core->db->query($sql));

    I get:

        PDOStatement Object ( [queryString] => SELECT * FROM me_config )

    I'm really confused. What am I doing wrong?
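    A hedged sketch of the usual shape of this pattern, for comparison: the instance() method above is not declared static, so DB::instance() is a static call to a non-static method, and PDO also stays silent on failed queries unless its error mode is set to exceptions. Both points are illustrated below; this is a sketch, not a drop-in fix.

        class DB extends PDO
        {
            private static $instance = null;

            public static function instance($dsn, $username = null, $password = null, $driver_options = array())
            {
                if (self::$instance === null) {
                    self::$instance = new self($dsn, $username, $password, $driver_options);
                    // Make query()/prepare() failures throw instead of returning false silently.
                    self::$instance->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                }
                return self::$instance;
            }
        }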

    Read the article

  • UTF-8 MySQL and Charset, pls help me understand this once and for all!

    - by FFish
    Can someone explain to me why, when I set everything to UTF-8, I keep getting those damn "???" characters?

        MySQL server version: 5.1.44
        MySQL charset: UTF-8 Unicode (utf8)

    I create a new database with name utf8test and collation utf8_general_ci; the MySQL connection collation is utf8_general_ci. My SQL looks like this:

        SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";

        CREATE TABLE IF NOT EXISTS `test_table` (
          `test_id` int(11) NOT NULL,
          `test_text` text NOT NULL,
          PRIMARY KEY (`test_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        INSERT INTO `test_table` (`test_id`, `test_text`) VALUES
        (1, 'hééélo'),
        (2, 'wööörld');

    My PHP / HTML:

        <?php
        $db_conn = mysql_connect("localhost", "root", "") or die("Can't connect to db");
        mysql_select_db("utf8test", $db_conn) or die("Can't select db");

        // $result = mysql_query("set names 'utf8'"); // this works... why??

        $query = "SELECT * FROM test_table";
        $result = mysql_query($query);

        $output = "";
        while ($row = mysql_fetch_assoc($result)) {
            $output .= "id: " . $row['test_id'] . " - text: " . $row['test_text'] . "<br />";
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html lang="it" xmlns="http://www.w3.org/1999/xhtml" xml:lang="it">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>UTF-8 test</title>
        </head>
        <body>
            <?php echo $output; ?>
        </body>
        </html>
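    A hedged note on the commented-out line: the client connection itself still defaults to latin1 unless it is told otherwise, which is why SET NAMES 'utf8' fixes the output even though the table and database are utf8. A minimal sketch of the API equivalent follows; mysql_set_charset() also keeps mysql_real_escape_string() charset-aware.

        <?php
        $db_conn = mysql_connect("localhost", "root", "") or die("Can't connect to db");
        mysql_select_db("utf8test", $db_conn) or die("Can't select db");

        // Same effect as "SET NAMES 'utf8'", applied through the client API.
        mysql_set_charset('utf8', $db_conn);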

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content   # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)       # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the web page.

    My problem is that if the memcache entries have expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since no memcache entry exists, and (3) store the item in memcache. These calls add up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the page then takes long enough to load (~1 second) that the browser reports it as not existing. Normally my pages take about 50 ms to load.

    This approach works decently for frequent visits, but it has its flaws, as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.
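    A hedged sketch of one way to collapse the per-item traffic: the App Engine memcache batch calls let a cold cache cost one memcache round trip plus one batched datastore fetch, rather than three calls per item. Names such as build_item_dict follow the pseudo-code above; the model API details are assumptions.

        from google.appengine.api import memcache
        from google.appengine.ext import db

        def get_item_dicts(keys):
            ids = [str(k.id()) for k in keys]
            cached = memcache.get_multi(ids, key_prefix='item:')   # one round trip

            missing = [k for k, i in zip(keys, ids) if i not in cached]
            if missing:
                fresh = {}
                for entity in db.get(missing):                      # one batched datastore fetch
                    fresh[str(entity.key().id())] = build_item_dict(entity)
                memcache.set_multi(fresh, key_prefix='item:')       # one round trip to backfill
                cached.update(fresh)

            return [cached[i] for i in ids]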

    Read the article

  • How to run django on localhost with nginx and uwsgi?

    - by user2426362
    How do I run Django on localhost with nginx and uwsgi? This is my config, but it does not work.

    nginx:

        server {
            listen 80;
            server_name localhost;

            access_log /var/log/nginx/localhost_access.log;
            error_log /var/log/nginx/localhost_error.log;

            location / {
                uwsgi_pass unix:///tmp/localhost.sock;
                include uwsgi_params;
            }

            location /media/ {
                alias /home/user/projects/zt/myproject/myproject/media/;
            }

            location /static/ {
                alias /home/user/projects/zt/myproject/myproject/static/;
            }
        }

    uwsgi:

        [uwsgi]
        vhost = true
        plugins = python
        socket = /tmp/localhost.sock
        master = true
        enable-threads = true
        processes = 2
        wsgi-file = /home/user/projects/zt/myproject/myproject/wsgi.py
        virtualenv = /home/user/projects/zt
        chdir = /home/user/projects/zt/myproject
        touch-reload = /home/user/projects/zt/myproject/reload

    This config works on my Ubuntu server with a normal domain (not localhost), but on localhost it does not work. If I open localhost in a web browser I get "Welcome to nginx!".
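    A hedged reading of the "Welcome to nginx!" page: that is the stock default site answering, which suggests the default server block is matching localhost before this one. On a Debian/Ubuntu-style sites-enabled layout (an assumption), disabling the default site and reloading is the usual first check:

        # Assumes the sites-enabled layout used by Ubuntu/Debian nginx packages.
        sudo rm /etc/nginx/sites-enabled/default
        sudo nginx -t && sudo service nginx reload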

    Read the article

  • Access 2007 not allowing user to delete record in subform

    - by Todd McDermid
    Good day... The root of my issue is that there's no context menu allowing the user to delete a row from a form. The "delete" button on the ribbon is also disabled. In Access 2003 this function was apparently available, but since our recent "upgrade" to 2007 (the file is still in MDB format) it's no longer there. Please keep in mind I'm not an Access dev, nor did I create this app - I inherited support for it. ;)

    Now for the details and what I've tried. The form in question is a subform on a larger form. I've tried turning "AllowDeletes" on for both forms. I've checked the toolbar and ribbon properties on the forms to see if they loaded some custom stuff, but no. I've tried changing the "record locks" setting to "on edit", with no joy. I examined the query to see if it was "too complicated" to permit a delete - as far as I can tell, it's a very simple two-table (linked) join. Compared to another form in this app that does permit row deletes, that form has a much more complicated query (multi-join, built on queries).

    Is there a resource that describes the required conditions for allowing deletes? Thanks in advance...

    Read the article

  • Why aren't my MySQL GROUP BY hours vs. half-hours queries displaying the same data?

    - by stogdilla
    I need to be able to display data that I have in 15-minute increments in different display types. I have two queries that are giving me trouble: one shows data by half hour, the other shows data by hour. The only issue is that the totals change between the queries. It's not counting the data that happens between the time frames, only the data AT the time frames.

    For example: 5 things happen at 7:15am, 2 happen at 7:30am and 4 happen at 7:00am. The 15-minute view displays all of the data. The half-hour view displays the data from 7:00am and from 7:30am but ignores 7:15am. The hour view only shows the 7:00am data. Here are my queries:

        $query = "SELECT * FROM data
                  WHERE startDate='$startDate' and queue='$queue'
                  GROUP BY HOUR(start), floor(minute(start)/30)";

    and

        $query = "SELECT * FROM data
                  WHERE startDate='$startDate' and queue='$queue'
                  GROUP BY HOUR(start)";

    How can I pull the data out in groups like this but have all the data included? Is the issue the way the data is stored in the MySQL table? Currently I have a column with dates (2010-03-29) and a column with times (00:00). Do I need to convert these into something else?
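    A hedged sketch of the bucketed form such a query usually takes: with SELECT * plus GROUP BY, MySQL returns one arbitrary row per bucket rather than a total, so the rows "between" the boundaries look like they vanish. Aggregating explicitly keeps every row in its bucket. Column names follow the question; the WHERE values are placeholders for the bound parameters; untested.

        SELECT HOUR(start)                    AS bucket_hour,
               FLOOR(MINUTE(start) / 30) * 30 AS bucket_minute,
               COUNT(*)                       AS events
          FROM data
         WHERE startDate = '2010-03-29'       -- stand-in for $startDate
           AND queue = 'example_queue'        -- stand-in for $queue
         GROUP BY bucket_hour, bucket_minute;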

    Read the article

  • Refreshing a JPA result list which is bound with a jTable

    - by exhuma
    First off, I have to write this on my mobile, but I'll try to format it properly.

    In NetBeans I created a jTable and bound its values to a JPA result set. This works great. The query contains a parameter which I set in the pre-create box of the "query result" component. So, just before NetBeans creates the query result, I write this:

        myQuery.setParameter("year", "1997");

    This works fine. Now I have an event handler which is supposed to change the parameter and display the new values in the table. So I do this:

        myQuery.setParameter("year", "2005");
        myResultList.clear();
        myResultList.addAll(myQuery.getResultList());
        jTable1.updateUI();

    This works, but it feels wrong to me. Note: the result set is bound to the table. So I was hoping there was something like this:

        myQuery.setParameter("year", "2005");
        myResultList.refresh();

    Is there something like this?

    Read the article

  • CFfile -- value is not set to the queried data

    - by user494901
    I have an "add user" form that also doubles as an "edit user" form by querying the data and setting value="#query.xvalue#". If the user exists (e.g. you're editing a user), it loads the user's data from the database. On the <cffile> field, however, it does not load the data, so when the insert runs it overwrites the database values with an empty string (if the user does not upload a new file). How do I avoid this? Code:

    Form:

        <br/>Digital Copy<br/>
        <!--- If null, set a default; if not, set the default to the database value --->
        <cfif len(Trim(certificationsList.cprAdultImage)) EQ 0>
            <cfinput type="file" required="no" name="cprAdultImage" value="">
        <cfelse>
            File Exists: <cfoutput><a href="#certificationsList.cprAdultImage#">View File</a></cfoutput>
            <cfinput type="file" required="no" name="cprAdultImage" value="#certificationsList.cprAdultImage#">
        </cfif>

    Form processor:

        <!--- Has a file been specified? --->
        <cfif not len(Trim(form.cprAdultImage)) EQ 0>
            <cffile action="upload" filefield="cprAdultImage" destination="#destination#" nameConflict="makeUnique">
            <cfinvokeargument name="cprAdultImage" value="#pathOfFile##cffile.serverFile#">
        <cfelse>
            <cfinvokeargument name="cprAdultImage" value="">
        </cfif>

    CFC arguments:

        <cfargument name="cprAdultExp" required="NO">
        <cfargument name="cprAdultCompany" type="string" required="no">
        <cfargument name="cprAdultImage" type="string" required="no">
        <cfargument name="cprAdultOnFile" type="boolean" required="no">

    Query:

        UPDATE mod_StudentCertifications
        SET cprAdultExp='#DateFormat(ARGUMENTS.cprAdultExp, "mm/dd/yyyy")#',
            cprAdultCompany='#Trim(ARGUMENTS.cprAdultCompany)#',
            cprAdultImage='#Trim(ARGUMENTS.cprAdultImage)#',
            cprAdultOnFile='#Trim(ARGUMENTS.cprAdultOnFile)#'

        INSERT INTO mod_StudentCertifications(
            cprAdultExp,
            cprAdultcompany,
            cprAdultImage,
            cprAdultOnFile

    Read the article

  • Can't install Ubuntu 12 into VirtualBox (USB not recognized, ISO would not boot)

    - by wvxvw
    I'm trying out VirtualBox 4.18 and wanted to install Ubuntu 12 as a test. After installing VirtualBox (on Debian squeeze/sid), creating a virtual machine for Ubuntu, and pointing it (under Settings > Storage > IDE Controllers) to the ISO with the proper version of Ubuntu, I checked the "Live CD" option. I tried defining the IDE device as master/slave and primary/secondary, all to no effect; when trying to boot the system I get to a screen which says:

        FATAL: could not read from the boot medium! System halted

    I've copied the same ISO to a USB stick, and I can boot from the USB (outside VirtualBox). I've looked at a couple of tutorials/walkthroughs, and there's nothing I can see that I would have done wrong. So, how do I configure it to boot from the desired ISO? Below is a snapshot of the current settings (sorry, I don't know how to get them as text).

    Read the article

  • How to eliminate duplicate rows?

    - by Odette
    Hi guys, I'm back with my original query and I just have one question, please. (PS: I know I have to vote and register, and I promise I will do that today.)

    With the following query (T-SQL) I am getting the correct results, except that there are now duplicates. I have been reading up and think I can use the PARTITION BY syntax - can you please show me how to incorporate PARTITION BY here?

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01, 2) AS COST
            FROM @[email protected]
            WHERE OTIT01 < ''
            UNION ALL
            ...
            SELECT OTQUOT, OTIT10 AS ITEMS, ROUND(OQCQ10 * OVRC10, 2) AS COST
            FROM @[email protected]
            WHERE OTIT10 < ''
        )
        SELECT OTQUOT, DESC, ITEMS, RN
        FROM (
            SELECT OTQUOT, ITEMS, B.IXRPGP AS GROUP, C.OTRDSC AS DESC, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT ORDER BY COST DESC) AS RN
            FROM CALC1 AS A
            INNER JOIN @[email protected] AS B ON (A.ITEMS = B.IKITMC)
            INNER JOIN DATAGRP.GDSGRP AS C ON (B.IXRPGP = C.OKRPGP)
        ) T

    RESULTS:

        60408169  FENCING  GNCPDCTP18BGBG   1
        60408169  FENCING  CGIFESHPD1795BG  2
        60408169  FENCING  GTTCGIBG         3
        60408169  FENCING  GBTCGIBG         4

    How do I get rid of the duplicates? Thanks Bill, and all the others, for your help (I am still learning!).
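    A hedged, generic illustration of the usual pattern (table and column names here are placeholders, not the ones above): number the rows within each partition that defines a duplicate, then keep only the first row of each partition.

        WITH ranked AS (
            SELECT quote_no, item_code, item_desc, cost,
                   ROW_NUMBER() OVER (PARTITION BY quote_no, item_code
                                      ORDER BY cost DESC) AS rn
            FROM quote_lines
        )
        SELECT quote_no, item_code, item_desc, cost
        FROM ranked
        WHERE rn = 1;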

    Read the article

  • PHP-FPM runs PHP scripts as root

    - by fwalch
    I have a web server setup using nginx and PHP-FPM listening on a Unix socket. In my php-fpm.conf, I have specified:

        user = www
        group = www

    When I run ps aux, I can see that the php-fpm worker processes run as www; the php-fpm master process runs as root. However, I noticed that PHP scripts are executed as root; at least, that's the output of:

        echo get_current_user();

    What can I do to run scripts as the www user? How can this even happen if the worker processes run as www?
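    A hedged observation worth checking here: get_current_user() reports the owner of the current script file, not the user the PHP process runs as, so a root-owned script prints "root" even inside a www worker. A minimal sketch of checking the actual process user (requires the posix extension):

        <?php
        // Effective user of the PHP-FPM worker handling this request.
        $processUser = posix_getpwuid(posix_geteuid());
        echo $processUser['name'];

        // For comparison: owner of the script file on disk.
        echo get_current_user();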

    Read the article

  • SQL Server 2005 - Building a WHERE clause

    - by user336786
    Hello, I have a stored procedure that dynamically builds a query. The WHERE clause associated with this query is based on filter values selected by a user. No matter what I do, though, the WHERE clause never seems to get set.

        -- Dynamically build the WHERE clause based on the filters
        DECLARE @whereClause as nvarchar(1024)

        IF (@hasSpouse > -1)
        BEGIN
          IF (@hasSpouse = 0)
            SET @whereClause='p.[HasSpouse]=0'
          ELSE
            SET @whereClause='(p.[HasSpouse]=1 OR p.[HasSpouse] IS NULL)'
        END

        -- Dynamically add the next filter if necessary
        IF (@isVegan > -1)
        BEGIN
          IF (LEN(@whereClause) > 0)
          BEGIN
            SET @whereClause = @whereClause + ' AND '
          END

          IF (@isVegan = 0)
            SET @whereClause = @whereClause + 'c.[IsVegan]=0'
          ELSE
            SET @whereClause = @whereClause + '(c.[IsVegan]=1 OR c.[IsVegan] IS NULL)'
        END

        PRINT @whereClause

    The @whereClause never prints anything. In turn, LEN(@whereClause) is always NULL. The @isVegan and @hasSpouse values are passed into the stored procedure, and they are what I expected. What am I doing wrong? Why is @whereClause never being set? Thank you for your help!
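    A hedged sketch of the usual culprit in this shape of code: an uninitialized @whereClause is NULL, NULL + 'anything' stays NULL under the default CONCAT_NULL_YIELDS_NULL setting, and PRINT of NULL emits nothing (which also matches LEN(@whereClause) being NULL). Seeding the variable with an empty string changes that. The DECLARE for @hasSpouse below stands in for the procedure parameter.

        DECLARE @hasSpouse int; SET @hasSpouse = 1;   -- stand-in for the proc parameter
        DECLARE @whereClause nvarchar(1024);
        SET @whereClause = N'';                       -- seed so concatenation never hits NULL

        IF (@hasSpouse > -1)
          SET @whereClause = @whereClause +
            CASE WHEN @hasSpouse = 0
                 THEN N'p.[HasSpouse]=0'
                 ELSE N'(p.[HasSpouse]=1 OR p.[HasSpouse] IS NULL)'
            END;

        PRINT @whereClause;                           -- now prints the clause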

    Read the article

  • How to filter Many2Many / Generic Relations properly with Q?

    - by HWM-Rocker
    Hi, I have 3 Models, the TaggedObject has a GenericRelation with the ObjectTagBridge. And the ObjectTagBridge has a ForeignKey to the Tag Model. class TaggedObject(models.Model): """ class that represent a tagged object """ tags = generic.GenericRelation('ObjectTagBridge', blank=True, null=True) class ObjectTagBridge(models.Model): """ Help to connect a generic object to a Tag. """ # pylint: disable-msg=W0232,R0903 content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') tag = models.ForeignKey('Tag') class Tag(models.Model): ... when I am attaching a Tag to an Object, I am creating a new ObjectTagBridge and set its ForeignKey tag to the Tag I want to attach. That is working fine, and I can get all Tags that I attached to my Object very easy. But when I want to get (filter) all Objects that have Tag1 and Tag2 I tried to something like this: query = Q(tags__tag=Tag1) & Q(tags__tag=Tag2) object_list = TaggedObjects.filter(query) but now my object_list is empty, because it is looking for TaggedObjects that have one ObjectTagBridge with 2 tag objects, the first with Tag1 and the second with Tag2. I my application will be more complex Q queries than this one, so I think I need a solution with this Q object. In fact any combination of binary conjunctions, like: (...) and ( (...) or not(...)) How can I filter this correctly? Every answer is welcome, maybe there is also a different way do achieve this. thx for your help!!!

    Read the article

  • confusion about using types instead of gtts in oracle

    - by Omnipresent
    I am trying to convert queries like the one below to use types, so that I won't have to use a GTT:

        insert into my_gtt_table_1
          (house, lname, fname, MI, fullname, dob)
        (select house, lname, fname, MI, fullname, dob
           from (select 'REG' house,
                        mbr_last_name lname,
                        mbr_first_name fname,
                        mbr_mi MI,
                        mbr_first_name || mbr_mi || mbr_last_name fullname,
                        mbr_dob dob
                   from table_1 a, table_b b
                  where a.head = b.head
                    and mbr_number = '01'
                    and mbr_last_name = v_last_name) c

    The above is just a sample, but the complex queries are bigger than this, and it lives inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object
        (
          house    varchar2(30),
          lname    varchar2(30),
          fname    varchar2(30),
          MI       char(1),
          fullname varchar2(63),
          dob      DATE
        )

        create or replace type lname_exact as table of lname_row

    Now in the stored procedure:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put, as my query has nested subqueries. The query in the stored procedure (I am trying this as a sample to see if it works) is:

        select lname_row('', '', '', '', '', '', sysdate)
          bulk collect into tab_a_recs
          from table_1;

    I am getting errors like:

        ORA-00913: too many values

    I am really confused and stuck with this :(
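    A hedged sketch of how this usually lines up: the collection variable is declared with the SQL collection type itself (not a new PL/SQL table of %ROWTYPE), and the object constructor is given exactly the six attributes lname_row declares; the sample call above passes seven values, which by itself raises ORA-00913. Column names follow the sample query; untested.

        DECLARE
          tab_a_recs lname_exact;
        BEGIN
          SELECT lname_row('REG',
                           mbr_last_name,
                           mbr_first_name,
                           mbr_mi,
                           mbr_first_name || mbr_mi || mbr_last_name,
                           mbr_dob)
            BULK COLLECT INTO tab_a_recs
            FROM table_1;
        END;
        /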

    Read the article

  • Tracking the linux config with git: how?

    - by Pierre
    I'd like to track my Linux configuration with git. My idea is to have a branch for each server. /etc is not the only directory to be tracked (so I won't just git init in /etc). As far as I can tell, it is possible to init git with a separate work tree, so I tried this:

        # mkdir -p /git/.git
        # cd /git
        # git --work-tree=/ --git-dir=/git/.git init
        Initialized empty Git repository in /git/.git/

    1) Creating a new branch before anything else is not possible:

        # git branch server1
        fatal: Not a valid object name: 'HEAD'.

    2) Adding a file to master/HEAD is not possible:

        # touch README.md
        # git add README.md
        fatal: Unable to create '//.git/index.lock': No such file or directory

    How should I properly set up git to track my system config? Thanks. P.
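    A hedged sketch of one way this is often set up: keep supplying the repository/work-tree pair to every git invocation (here via the two environment variables) and make an initial commit before branching, since a branch can only be created once HEAD points at a commit. Untested on this exact layout.

        export GIT_DIR=/git/.git
        export GIT_WORK_TREE=/

        git init
        git add /etc/hostname                    # any existing file will do for the first commit
        git commit -m "initial system snapshot"
        git branch server1                       # works now that HEAD points at a commit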

    Read the article

  • No longer able to run photo software on XP

    - by Peter
    I have Olympus Master 2 and PhotoImpression 6 on my system, which is running XP Pro. Both programmes ran fine until recently, and now neither works. I have run Avira and AVG anti-virus (only one loaded at a time), SUPERAntiSpyware, Spybot and Malwarebytes and removed the usual dross, but no help. I have also uninstalled and reinstalled both several times, again with no help. Both programmes load, but the Olympus one immediately goes to the usual "it has met a problem and needs to close" message. PhotoImpression loads fully, but as soon as I go to the edit screen it gives the same message. I have tried another profile to no avail. Any idea why this is happening and what the solution is?

    Read the article

  • Using reflection to find all linq2sql tables and ensure they match the database

    - by Jake Stevenson
    I'm trying to use reflection to automatically test that all my LINQ to SQL entities match the test database. I thought I'd do this by getting all the classes that inherit from DataContext in my assembly:

        var contextTypes = Assembly.GetAssembly(typeof(BaseRepository<,>)).GetTypes()
            .Where(t => t.IsSubclassOf(typeof(DataContext)));

        foreach (var contextType in contextTypes)
        {
            var context = Activator.CreateInstance(contextType);
            var tableProperties = contextType.GetProperties()
                .Where(p => p.PropertyType.Name == typeof(ITable<>).Name);

            foreach (var propertyInfo in tableProperties)
            {
                var table = propertyInfo.GetValue(context, null);
            }
        }

    So far so good; this loops through each ITable<> in each DataContext in the project. If I debug the code, "table" is properly instantiated, and if I expand the results view in the debugger I can see actual data.

    BUT, I can't figure out how to get my code to actually query that table. I'd really like to just be able to call table.FirstOrDefault() to get the top row out of each table and make sure the SQL fetch doesn't fail. But I can't cast the table to anything I can query. Any suggestions on how I can make this queryable? Just the ability to call .Count() would be enough for me to ensure the entities don't have anything that doesn't match the table columns.
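    A hedged sketch of one low-tech way to force the query without knowing the entity type: every Table<T> implements the non-generic IEnumerable, so enumerating a single row per table is enough to make LINQ to SQL issue the SELECT and surface any column-mapping mismatch. A smoke test, not the only option.

        using System;
        using System.Collections;
        using System.Data.Linq;
        using System.Linq;
        using System.Reflection;

        static void SmokeTestTables(Assembly assembly)
        {
            var contextTypes = assembly.GetTypes()
                .Where(t => t.IsSubclassOf(typeof(DataContext)));

            foreach (var contextType in contextTypes)
            {
                using (var context = (DataContext)Activator.CreateInstance(contextType))
                {
                    var tableProperties = contextType.GetProperties()
                        .Where(p => p.PropertyType.IsGenericType &&
                                    p.PropertyType.GetGenericTypeDefinition() == typeof(Table<>));

                    foreach (var property in tableProperties)
                    {
                        // Enumerating issues the SELECT; pulling one row is enough to
                        // fail loudly if the mapping and the schema disagree.
                        var table = (IEnumerable)property.GetValue(context, null);
                        table.GetEnumerator().MoveNext();
                        Console.WriteLine("{0}.{1}: OK", contextType.Name, property.Name);
                    }
                }
            }
        }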

    Read the article
