Search Results

Search found 21434 results on 858 pages for 'query master'.


  • forward slash problem in xsl and xsql

    - by Peter Kaleta
    Hi, I have a simple XSQL page: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="zad1.xsl" ?> <page xmlns:xsql="urn:oracle-xsql" connection="java:comp/env/jdbc/mondialDS"> <xsql:query max-rows="-1" null-indicator="no" tag-case="lower" rowset-element="continents"> select name as continent from mondial_user.Continent order by 1 </xsql:query> </page> which gives me a list of continents with "australia/oceania" among them. I use this XSL on the above XSQL: <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <!-- Root template --> <res> <xsl:template match="/continents"> <xsl:for-each select="row"> <re> <xsl:value-of select="continent"/> </re> </xsl:for-each> </xsl:template> </res> </xsl:stylesheet> And Firefox throws a not-well-formed XML document error on the transformed output: AfricaAmericaAsiaAustralia/OceaniaEurope -----------------------------------^ Help appreciated
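
    The stylesheet, not the slash, is the likely culprit: a literal result element such as <res> cannot wrap an <xsl:template> at the top level of a stylesheet, and without it the output is a bare run of text nodes (AfricaAmericaAsia...) with no root element, which is exactly the well-formedness error Firefox points at. A sketch of a corrected stylesheet, with <res> moved inside the template:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="2.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Root template: emit a single root element around the rows -->
          <xsl:template match="/continents">
            <res>
              <xsl:for-each select="row">
                <re><xsl:value-of select="continent"/></re>
              </xsl:for-each>
            </res>
          </xsl:template>
        </xsl:stylesheet>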

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on CentOS 5.4, PHP 5.2.10, Xdebug 2.0.5 with Remote Debugging enabled, APC 3.0.19, Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC, and MySQL 5.0.77 using Query Caching. I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until it approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

    PID  USER   PR NI  VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
    1471 apache 16  0  626m 201m 18m S  0.0 10.2 1:11.02 httpd
    1470 apache 16  0  622m 198m 18m S  0.0 10.1 1:14.49 httpd
    1469 apache 16  0  619m 197m 18m S  0.0 10.0 1:11.98 httpd
    1462 apache 18  0  622m 197m 18m S  0.0 10.0 1:11.27 httpd
    1460 apache 15  0  622m 195m 18m S  0.0 10.0 1:12.73 httpd
    1459 apache 16  0  618m 191m 18m S  0.0  9.7 1:13.00 httpd
    1461 apache 18  0  616m 190m 18m S  0.0  9.7 1:14.09 httpd
    1468 apache 18  0  613m 190m 18m S  0.0  9.7 1:12.67 httpd
    7919 apache 18  0  116m  75m 15m S  0.0  3.8 0:19.86 httpd
    9486 apache 16  0 97.7m  56m 14m S  0.0  2.9 0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses is deallocated. (Maybe someone can correct me on that.) My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
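
    One low-tech way to see the growth per request (a sketch; the log path and the hook placement, e.g. via auto_prepend_file or the app's bootstrap, are assumptions) is to record each child's PID and peak memory at shutdown, then watch which requests make a given PID's numbers climb:

        <?php
        // Hypothetical probe: append "pid uri peak-bytes" to a log when
        // each request finishes, so leaking children can be spotted.
        function log_peak_memory() {
            $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-';
            $line = getmypid() . ' ' . $uri . ' '
                  . memory_get_peak_usage(true) . "\n";
            file_put_contents('/tmp/mem.log', $line, FILE_APPEND);
        }
        register_shutdown_function('log_peak_memory');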

    Read the article

  • php: showing my country based on my IP, mysql optimized

    - by andufo
    I downloaded WIPmania's worldip table from http://www.wipmania.com/en/base/ -- the table has 3 fields and around 79k rows:

    startip // example: 3363110912
    endip   // example: 3363112063
    country // example: AR (Argentina)

    So, let's suppose I'm in Argentina and my IP address is 200.117.248.17. 1) I use this function to convert my IP to a long: function ip_address_to_number($ip) { if(!$ip) { return false; } else { $ip = split('\.',$ip); return($ip[0]*16777216 + $ip[1]*65536 + $ip[2]*256 + $ip[3]); } } 2) I search for the proper country code by matching the long-converted IP: $sql = 'SELECT * FROM worldip WHERE '.ip_address_to_number($_SERVER['REMOTE_ADDR']).' BETWEEN startip AND endip'; which is equivalent to: SELECT country FROM worldip WHERE 3363174417 BETWEEN startip AND endip (benchmark: Showing rows 0 - 0 (1 total, Query took 0.2109 sec)) Now comes the real question. What if another bunch of Argentinian guys also open the website and they all have these IP addresses: 200.117.248.17 200.117.233.10 200.117.241.88 200.117.159.24 Since I'm caching all the SQL queries, instead of matching EACH of the IPs against the database, would it be better (and correct) just to match the first 2 sections of the IP by modifying the function like this? function ip_address_to_number($ip) { if(!$ip) { return false; } else { $ip = split('\.',$ip); return($ip[0]*16777216 + $ip[1]*65536); } } (notice that the 3rd and 4th split values of the IP have been removed). That way, instead of querying these 4 values: 3363174417 3363170570 3363172696 3363151640 ...all I have to query is 3363110912 (which is 200.117.0.0 converted to long). Is this right? Any other ideas to optimize this process? Thanks!
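
    A caution before truncating: IP-to-country allocations do not align to /16 boundaries, so two addresses sharing their first two octets can belong to different countries, and caching on the truncated value can return wrong answers. A safer sketch keeps the exact lookup but leans on PHP's built-in ip2long() and an index-friendly query (column names as in the table above):

        // force an unsigned result on 32-bit PHP builds
        $long = sprintf('%u', ip2long($_SERVER['REMOTE_ADDR']));
        // with an index on startip this is a single seek, not a range scan
        $sql = "SELECT country FROM worldip
                WHERE startip <= $long AND endip >= $long
                ORDER BY startip DESC
                LIMIT 1";

    The per-IP result can then be cached as before; only the first visitor from a given address pays for the query.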

    Read the article

  • Comparing two date ranges within the same table

    - by Danny Herran
    I have a table with sales per store as follows:

    SQL> select * from sales;

    ID  ID_STORE  DATE        TOTAL
    --  --------  ----------  -------
     1         1  2010-01-01   500.00
     2         1  2010-01-02   185.00
     3         1  2010-01-03   135.00
     4         1  2009-01-01   165.00
     5         1  2009-01-02   175.00
     6         5  2010-01-01   130.00
     7         5  2010-01-02   135.00
     8         5  2010-01-03   130.00
     9         6  2010-01-01   100.00
    10         6  2010-01-02    12.00
    11         6  2010-01-03    85.00
    12         6  2009-01-01   135.00
    13         6  2009-01-02   400.00
    14         6  2009-01-07    21.00
    15         6  2009-01-08    45.00
    16         8  2009-01-09   123.00
    17         8  2009-01-10   581.00

    17 rows selected.

    What I need to do is to compare two date ranges within that table. Let's say I need to know the differences in sales between 01 Jan 2009 to 10 Jan 2009 AGAINST 01 Jan 2010 to 10 Jan 2010. I'd like to build a query that returns something like this:

    ID_STORE_A  DATE_A      TOTAL_A  ID_STORE_B  DATE_B      TOTAL_B
    ----------  ----------  -------  ----------  ----------  -------
    1           2010-01-01  500.00   1           2009-01-01  165.00
    1           2010-01-02  185.00   1           2009-01-02  175.00
    1           2010-01-03  135.00   1           NULL        NULL
    5           2010-01-01  130.00   5           NULL        NULL
    5           2010-01-02  135.00   5           NULL        NULL
    5           2010-01-03  130.00   5           NULL        NULL
    6           2010-01-01  100.00   6           2009-01-01  135.00
    6           2010-01-02   12.00   6           2009-01-02  400.00
    6           2010-01-03   85.00   6           NULL        NULL
    6           NULL        NULL     6           2009-01-07   21.00
    6           NULL        NULL     6           2009-01-08   45.00
    6           NULL        NULL     8           2009-01-09  123.00
    6           NULL        NULL     8           2009-01-10  581.00

    So, even if there are no sales in one range or another, it should just fill the empty space with NULL. So far I've come up with this quick query, but the "dates" matched from sales to sales2 are sometimes different in each row:

    SELECT sales.*, sales2.*
    FROM sales
    LEFT JOIN sales AS sales2 ON (sales.id_store = sales2.id_store)
    WHERE sales.date >= '2010-01-01' AND sales.date <= '2010-01-10'
      AND sales2.date >= '2009-01-01' AND sales2.date <= '2009-01-10'
    ORDER BY sales.id_store ASC, sales.date ASC, sales2.date ASC

    What am I missing?
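
    One way to get that shape (a sketch, assuming rows should pair up on the same month and day one year apart; MySQL has no FULL OUTER JOIN, so unmatched 2009 rows are appended with a second SELECT):

        SELECT a.id_store AS id_store_a, a.date AS date_a, a.total AS total_a,
               COALESCE(b.id_store, a.id_store) AS id_store_b,
               b.date AS date_b, b.total AS total_b
        FROM sales a
        LEFT JOIN sales b
               ON b.id_store = a.id_store
              AND b.date = a.date - INTERVAL 1 YEAR
        WHERE a.date BETWEEN '2010-01-01' AND '2010-01-10'

        UNION ALL

        SELECT b.id_store, NULL, NULL, b.id_store, b.date, b.total
        FROM sales b
        WHERE b.date BETWEEN '2009-01-01' AND '2009-01-10'
          AND NOT EXISTS (SELECT 1 FROM sales a
                          WHERE a.id_store = b.id_store
                            AND a.date = b.date + INTERVAL 1 YEAR)

        ORDER BY id_store_b, date_a, date_b;

    The original query's problem is different: joining only on id_store pairs every 2010 row with every 2009 row for that store (a cross product), and the WHERE conditions on sales2 silently turn the LEFT JOIN back into an inner join.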

    Read the article

  • Samba - Is my server vulnerable to CVE-2008-1105?

    - by Joao Heleno
    Hi! I have a CentOS server that is running Samba and I want to verify the vulnerability addressed by CVE-2008-1105. What scenarios can I build in order to run the exploit that is mentioned in http://secunia.com/advisories/cve_reference/CVE-2008-1105/? http://secunia.com/secunia_research/2008-20/advisory/ says that "Successful exploitation allows execution of arbitrary code by tricking a user into connecting to a malicious server (e.g. by clicking an "smb://" link) or by sending specially crafted packets to an "nmbd" server configured as a local or domain master browser." More info: http://www.samba.org/samba/security/CVE-2008-1105.html http://secunia.com/secunia_research/2008-20/advisory/

    Read the article

  • MySQL replication ignore data changes but not table structure changes

    - by Ed Manet
    Is there a way to set up MySQL replication so that CREATE TABLE and ALTER TABLE statements get replicated but INSERT, DELETE and UPDATE statements do not? I've got replication working fine and have several tables that are ignored as per the requirements. But we have a requirement that the slaves have an empty copy of the ignored table. We create those empty copies before we start replicating. Since the table is ignored, table structure changes don't get passed down from the master to the slave's empty copy. I know it's a strange requirement.
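
    One workaround (a sketch, and it moves the filtering from the slave to the master): stop ignoring the table in the replication config, and instead keep its data changes out of the binary log per session, so only the DDL ever reaches the slaves. This needs the SUPER privilege and discipline in every writer:

        -- data changes: suppress binary logging for this session only
        SET SESSION sql_log_bin = 0;
        INSERT INTO mytable (col) VALUES ('master-only row');
        SET SESSION sql_log_bin = 1;

        -- schema changes: run from a normal session so they replicate
        ALTER TABLE mytable ADD COLUMN created_at DATETIME;

    The table name here is hypothetical; the price is that any writer that forgets to toggle sql_log_bin leaks data to the slaves.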

    Read the article

  • Considering building a new system; but undecided about chassis

    - by J.C. Bengtson
    I'm considering a new system build, and was thinking that this particular motherboard has features I need and like: http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=667651&pagenumber=1&RSort=1&csid=ITD&recordsPerPage=5&body=REVIEWS#CustomerReviewsBlock ... but am unsure which model chassis to pair it with. I'd strongly prefer something from Cooler Master, as I'm a fan of their products, but am having a hard time deciding, and also don't want to get into some odd situation where the board doesn't properly fit. I plan on having two optical drives (5.25"), two internal HDs (3.5"), and will likely go with an SLI setup of 2 or possibly even 3 cards, so I'd need a chassis that is roomy enough to accommodate all of that, as well as the motherboard itself. Based on the stock available at that same site, do you all have any suggestions? The larger, the better, as I hate having components crammed together. Your help is most appreciated!

    Read the article

  • Using a custom IList obtained through NHibernate

    - by Abu Dhabi
    Hi. I'm trying to write a web page in .NET, using C# and NHibernate 2.1. The pertinent code looks like this: var whatevervar = session.CreateSQLQuery("select thread_topic, post_time, user_display_name, user_signature, user_avatar, post_topic, post_body from THREAD, [USER], POST, THREADPOST where THREADPOST.thread_id=" + id + " and THREADPOST.thread_id=THREAD.thread_id and [USER].user_id=POST.user_id and POST.post_id=THREADPOST.post_id ORDER BY post_time;").List(); (I have tried to use joins in HQL, but then fell back on this query, due to HQL's unreadability.) The problem is that I'm getting a result that is incompatible with a repeater. When I try this: posts.DataSource = whatevervar; posts.DataBind(); ...I get this: DataBinding: 'System.Object[]' does not contain a property with the name 'user_avatar'. In an earlier project, I used LINQ to SQL for this same purpose, and it looked like so: var whatevervar = from threads in context.THREADs join threadposts in context.THREADPOSTs on threads.thread_id equals threadposts.thread_id join posts1 in context.POSTs on threadposts.post_id equals posts1.post_id join users in context.USERs on posts1.user_id equals users.user_id orderby posts1.post_time where threads.thread_id == int.Parse(id) select new { threads.thread_topic, posts1.post_time, users.user_display_name, users.user_signature, users.user_avatar, posts1.post_body, posts1.post_topic }; That worked, and now I want to do the same with NHibernate. Unfortunately, I don't know how to make the repeater recognize the fields of the result of the query. Thanks in advance!
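
    One approach (a sketch; the DTO name is hypothetical): have NHibernate project each row into a small class whose property names match the selected columns, which a repeater can then bind by name:

        using NHibernate.Transform;

        // hypothetical view class, one property per selected column
        public class PostRow
        {
            public string thread_topic { get; set; }
            public DateTime post_time { get; set; }
            public string user_display_name { get; set; }
            public string user_signature { get; set; }
            public string user_avatar { get; set; }
            public string post_topic { get; set; }
            public string post_body { get; set; }
        }

        var rows = session.CreateSQLQuery(sql)     // same SQL as above
            .SetResultTransformer(Transformers.AliasToBean<PostRow>())
            .List<PostRow>();

        posts.DataSource = rows;   // Eval("user_avatar") etc. now resolves
        posts.DataBind();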

    Read the article

  • Fabric doesn't launch Nginx remotely

    - by endofu
    I want to be able to start and stop an nginx server on an Ubuntu EC2 instance with Fabric. I have these two tasks in my fabfile.py: def start_nginx(): sudo('/etc/init.d/nginx start') #also tried this: run('sudo /etc/init.d/nginx start') def stop_nginx(): sudo('/etc/init.d/nginx stop') The start_nginx() task seemingly runs without errors (* Starting Nginx Server.../ ...done.) but doesn't start the server (or it dies immediately). If I SSH into the instance, this starts nginx perfectly: sudo /etc/init.d/nginx start The stop_nginx() Fabric task stops the server remotely. I compiled nginx from source, using http://nginx.org/download/nginx-1.1.9.tar.gz and this script in /etc/init.d: https://github.com/JasonGiedymin/nginx-init-ubuntu/blob/master/nginx. The only thing I modified is this line: DAEMON=/usr/local/sbin/nginx to DAEMON=/usr/sbin/nginx because that's the path I used when I ran ./configure. Does anyone have any idea why the init script behaves differently when called from Fabric?
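
    A common culprit here (a sketch of the usual workaround, assuming Fabric 1.x): Fabric allocates a pseudo-terminal by default, and a daemon still attached to it when the SSH channel closes can be killed by the resulting SIGHUP, which matches "start reports success, then the server is gone". Disabling the pty, or detaching explicitly, usually fixes it:

        from fabric.api import sudo

        def start_nginx():
            # no pty, so nginx isn't tied to the dying SSH session
            sudo('/etc/init.d/nginx start', pty=False)

        def start_nginx_nohup():
            # alternative: detach explicitly and drop all output
            sudo('nohup /etc/init.d/nginx start >/dev/null 2>&1')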

    Read the article

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table to my local SQL table. I have a variable for the package that is an Object, called OWell. I have a data flow task that gets the Oracle data as a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR; and convert well_name from a DT_STR of length 15 to DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is the table that I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15, well_name a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things: using the OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None, and the following SQL statement: Insert into WELL (WELL_ID, WELL_NAME) Select OWELL_ID, OWELL_NAME from OWell where OWELL_ID not in (select WELL.WELL_ID from WELL) For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell. Parameter 1, called OWell_Name, is from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result set. I am getting the following error: Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?
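
    One thing worth knowing (and a sketch of a simpler pattern): an Execute SQL task cannot SELECT from an SSIS Object variable; the recordset in OWell is only reachable from a Foreach Loop with the ADO enumerator, or from a script task. A common alternative is to land the Oracle rows in a staging table with a data-flow destination, then do the whole merge set-based in one Execute SQL task (the staging table name is hypothetical):

        INSERT INTO WELL (WELL_ID, WELL_NAME)
        SELECT s.WELL_ID, s.WELL_NAME
        FROM Staging_OWell AS s
        WHERE s.WELL_ID NOT IN (SELECT w.WELL_ID FROM WELL AS w);

    That also sidesteps the DBTYPE_STR conversion error, since no recordset-to-parameter mapping is involved.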

    Read the article

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files? An example. Let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php:

    -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
    drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    After robin-server1$ git pull origin master:

    -rw-rw-r-- 1 robin robin     46068 Nov 16 12:35 site.php
    drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    And david no longer has write permissions on site.php, because the group changed from 'git-users' to 'robin'. From now on, david will get permission denied when he tries to pull in this repository.
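
    A sketch of the usual fix (the path is hypothetical): new files take the primary group of whoever created them unless their directory carries the setgid bit, so mark the repository as group-shared and set setgid on its directories:

        git config core.sharedRepository group
        chgrp -R git-users /var/www/staging
        find /var/www/staging -type d -exec chmod g+s {} +

    core.sharedRepository keeps the files under .git group-writable, and the setgid bit makes files that git re-creates in the working tree inherit git-users instead of the puller's primary group.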

    Read the article

  • Not getting response using SOAP and PHP.

    - by Nitish
    I'm using PHP5 and NuSOAP - SOAP Toolkit for PHP. I created the server using the code below: <?php function getStockQuote($symbol) { mysql_connect('localhost','user','pass'); mysql_select_db('test'); $query = "SELECT stock_price FROM stockprices WHERE stock_symbol = '$symbol'"; $result = mysql_query($query); $row = mysql_fetch_assoc($result); echo $row['stock_price']; } $a=require('lib/nusoap.php'); $server = new soap_server(); $server->configureWSDL('stockserver', 'urn:stockquote'); $server->register("getStockQuote", array('symbol' => 'xsd:string'), array('return' => 'xsd:decimal'), 'urn:stockquote', 'urn:stockquote#getStockQuote'); $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : ''; $server->service($HTTP_RAW_POST_DATA); ?> The client has the following code: <?php require_once('lib/nusoap.php'); $c = new soapclientNusoap('http://localhost/stockserver.php?wsdl'); $stockprice = $c->call('getStockQuote', array('symbol' => 'ABC')); echo "The stock price for 'ABC' is $stockprice."; ?> The database was created using the code below: CREATE TABLE `stockprices` ( `stock_id` INT UNSIGNED NOT NULL AUTO_INCREMENT , `stock_symbol` CHAR( 3 ) NOT NULL , `stock_price` DECIMAL(8,2) NOT NULL , PRIMARY KEY ( `stock_id` ) ); INSERT INTO `stockprices` VALUES (1, 'ABC', '75.00'); INSERT INTO `stockprices` VALUES (2, 'DEF', '45.00'); INSERT INTO `stockprices` VALUES (3, 'GHI', '12.00'); INSERT INTO `stockprices` VALUES (4, 'JKL', '34.00'); When I run the client the result I get is this: The stock price for 'ABC' is . 75.00 is not being printed as the price.
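
    The likely bug is in the server function: NuSOAP serializes the function's return value into the SOAP response, while echo writes outside the envelope, so the client's $stockprice comes back empty (and the echoed 75.00 leaks into the raw output). A sketch of the corrected function, with the symbol escaped for good measure:

        function getStockQuote($symbol) {
            mysql_connect('localhost', 'user', 'pass');
            mysql_select_db('test');
            $symbol = mysql_real_escape_string($symbol);
            $result = mysql_query(
                "SELECT stock_price FROM stockprices
                 WHERE stock_symbol = '$symbol'");
            $row = mysql_fetch_assoc($result);
            return $row['stock_price'];   // return, don't echo
        }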

    Read the article

  • Dynamic Linq Property Converting to Sql

    - by Matthew Hood
    I am trying to understand dynamic LINQ and expression trees. Very basically, I am trying to do an equals, supplying the column and value as strings. Here is what I have so far: private IQueryable<tblTest> filterTest(string column, string value) { TestDataContext db = new TestDataContext(); // The IQueryable data to query. IQueryable<tblTest> queryableData = db.tblTests.AsQueryable(); // Compose the expression tree that represents the parameter to the predicate. ParameterExpression pe = Expression.Parameter(typeof(tblTest), "item"); Expression left = Expression.Property(pe, column); Expression right = Expression.Constant(value); Expression e1 = Expression.Equal(left, right); MethodCallExpression whereCallExpression = Expression.Call( typeof(Queryable), "Where", new Type[] { queryableData.ElementType }, queryableData.Expression, Expression.Lambda<Func<tblTest, bool>>(e1, new ParameterExpression[] { pe })); // Create an executable query from the expression tree. IQueryable<tblTest> results = queryableData.Provider.CreateQuery<tblTest>(whereCallExpression); return results; } That works fine for columns in the DB, but fails for properties in my code, e.g. public partial class tblTest { public string name_test { get { return name; } } } It gives an error saying that it cannot be converted into SQL. I have tried rewriting the property as an Expression<Func but with no luck. How can I convert simple properties so they can be used with LINQ in this dynamic way? Many thanks
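
    For what it's worth, this is a hard limit: LINQ to SQL translates expression trees over mapped columns, and a computed CLR property like name_test has no SQL equivalent. A sketch of the usual trade-off, evaluating the dynamically built predicate in memory after the database returns the rows (fine for small tables, expensive for large ones):

        using System.Linq;
        using System.Linq.Expressions;

        private IEnumerable<tblTest> filterTestInMemory(string column, string value)
        {
            var db = new TestDataContext();
            var item = Expression.Parameter(typeof(tblTest), "item");
            var body = Expression.Equal(
                Expression.Property(item, column),   // a CLR property is fine here
                Expression.Constant(value));
            var predicate = Expression
                .Lambda<Func<tblTest, bool>>(body, item)
                .Compile();                          // delegate, not SQL
            return db.tblTests.AsEnumerable()        // rows materialize here
                              .Where(predicate);     // filter runs in memory
        }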

    Read the article

  • How to have supervisord follow the new unicorn process after a USR2 rolling restart?

    - by ybart
    I have configured supervisord to track my unicorn server process. When I send USR2 to the process, this performs a rolling restart. After this operation the old unicorn master is replaced and the PID changes. This causes supervisord to lose track of the unicorn process and consider it EXITED. How can I get supervisord to follow the new unicorn process after this operation? Unicorn has a PID file available, but I have not found an option in the supervisord configuration for this. Another option would be to have supervisord send the USR2 signal itself, but I don't know how to do that, or whether it would prevent my problem from occurring.
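
    supervisord only tracks processes it spawned directly, so a master that re-execs itself on USR2 escapes supervision by design. The usual compromise (a sketch; program name, command and paths are assumptions) gives up USR2 zero-downtime restarts and lets supervisord do the restarting instead:

        [program:unicorn]
        ; run in the foreground -- no -D/--daemonize
        command=bundle exec unicorn -c /app/config/unicorn.rb
        directory=/app
        autorestart=true
        ; QUIT asks unicorn to finish in-flight requests before exiting
        stopsignal=QUIT

    Restarts then go through supervisorctl restart unicorn; if true zero-downtime USR2 restarts are a requirement, unicorn is better run outside supervisord, or under a helper that follows the PID file.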

    Read the article

  • How to group data changes by operation with MySQL triggers

    - by Jan-Henk
    I am using triggers in MySQL to log changes to the data. These changes are recorded on a row level. I can now insert an entry in my log table for each row that is changed. However, I also need to record the operation to which the changes belong. For example, a delete operation like "DELETE FROM table WHERE type=x" can delete multiple rows. With the trigger I can insert an entry for each deleted row into the log table, but I would like to also provide a unique identifier for the operation as a whole, so that the log table looks something like:

    log_id  operation_id  tablename  fieldname  oldvalue  newvalue
         1             1  table      id         1         null
         2             1  table      type       a         null
         3             1  table      id         2         null
         4             1  table      type       a         null
         5             2  table      id         3         null
         6             2  table      type       b         null
         7             2  table      id         4         null
         8             2  table      type       b         null

    Is there a way in MySQL to identify the higher-level operation to which the row changes belong? Or is this only possible by means of application-level code? In the future it would also be nice to be able to record the transaction to which an operation belongs. Another question is whether it is possible to capture the actual SQL query, besides using the query log. I don't think so myself, but maybe I am missing something. It is of course possible to capture these at the application level, but the goal is to keep intrusions into the application-level code as minimal as possible. When this is not possible with MySQL, how is this done in other database systems? For the current project it is not an option to use something other than MySQL, but it would be nice to know for future projects.
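
    MySQL will not hand the trigger an operation ID by itself, but a session user variable works as one with only a thin application-level shim, since triggers can read variables set earlier on the same connection (table and column names below match the example log table; the audited table name is hypothetical):

        -- application sets one ID per logical operation
        SET @operation_id = UUID();
        DELETE FROM mytable WHERE type = 'a';   -- trigger fires once per row

        -- inside the row-level trigger body:
        INSERT INTO log_table (operation_id, tablename, fieldname, oldvalue, newvalue)
        VALUES (@operation_id, 'mytable', 'type', OLD.type, NULL);

    The same trick extends to transactions (set a second variable at BEGIN). For the query text itself, one known trick on MySQL 5.1+ is reading INFO from INFORMATION_SCHEMA.PROCESSLIST for the row matching CONNECTION_ID(), though that adds per-row overhead.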

    Read the article

  • How to create a clickable list of data? asp.net

    - by Roger Filipe
    Hello, I am learning ASP.NET but running into some problems. What I want is to make a list of news titles pulled from a database, where each title, when clicked, redirects to a page showing the news item in full (Title, Body, Author...). What I have: - A database containing a table with the news; each news item is associated with an identification code (ex: "ID"). - A page where the listing will be made. (Ex: site/listofnews.aspx) - A page that uses the querystring method to know the primary key of the news item. (Ex: site/shownews.aspx?ID=12345, where "12345" is the primary key of the news item.) Once it knows the primary key of the news item in the database, it loads each of the fields of the page (news.aspx) with the news; this part is working OK. - The data is retrieved using LINQ, so I receive a List of "News"; the class "News" has ID, Title, Body, Author... My question is how to make the listing clickable. In PHP I used this method (make a list of HTML links, where each link's href is built so that the "ID" tag matches the news item): //database used is oracle, $stmt is the query, you don´t need to understand how it works. oci_define_by_name($stmt, "ID", $id); oci_define_by_name($stmt, "TITLE", $title); if (!oci_execute($stmt, OCI_DEFAULT)) { $err = oci_error($stmt); trigger_error('Query failed: ' . $err['message'], E_USER_ERROR); } echo '<br />'; while (oci_fetch($stmt)) { // while there is data from the database, create links $link = '<a href="site/shownews.php?ID='.$id.'" >'.$title.'</a>'; // shownews.php?ID=$id builds the link according to the ID echo $link; // prints the link echo '<br />'; } How do I do the same with ASP.NET?
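
    A sketch of the ASP.NET equivalent, using a Repeater bound to the List of News (the control ID and the data-access method are hypothetical; ID and Title match the News class described above). Markup:

        <asp:Repeater ID="NewsList" runat="server">
          <ItemTemplate>
            <a href='shownews.aspx?ID=<%# Eval("ID") %>'><%# Eval("Title") %></a><br />
          </ItemTemplate>
        </asp:Repeater>

    Code-behind, e.g. in Page_Load:

        if (!IsPostBack)
        {
            List<News> news = GetNewsFromDatabase();   // your existing LINQ query
            NewsList.DataSource = news;
            NewsList.DataBind();
        }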

    Read the article

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a DFI CM33-TL/G ATX (identified using SiSoft Sandra) motherboard hosting an Intel Celeron 1.3GHz CPU, 768Mb PC133 SDRAM, a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board based on the NEC uPD720102 chipset containing 3 external and 1 internal USB sockets. It's also hosting a 1.44Mb floppy drive on FDD0, a new 80Gb Western Digital hard drive running as master on IDE0 and a Panasonic DVD+/-RW running as master on IDE1. All this is sitting in a slimline case running off a Macron Power MPT-135 135W Flex power supply. The motherboard is running a version of Award BIOS 05/24/2002-601T-686B-6A6LID4AC-00. Could this be updated? If so, from where? I've raked through the manufacturer's website but can't find any hint of downloads for either drivers or BIOS updates. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website. The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover. Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected. Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. through the BIOS in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable affect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors, is that right? 
I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done which I only just thought of while writing this essay is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work? Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS, would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to be able to find a BIOS edition specific for this motherboard or will any version of Award BIOS function in its place? Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

    - by haridsv
    I just started using SQLAlchemy and get a DetachedInstanceError and can't find much information on this anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load any relations if they are not already loaded; however, the attribute I am accessing is not a relation, and in fact this object has no relations at all. I found solutions such as eager loading, but I can't apply them here because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational property, even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here. Update: This is the actual exception message with a few stack frames:

    File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__
        return self.impl.get(instance_state(instance), instance_dict(instance))
    File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get
        value = callable_(passive=passive)
    File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__
        self.manager.deferred_scalar_loader(self, toload)
    File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes
        (state_str(state)))
    DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed

    The partial model looks like this:

    metadata = MetaData()
    ModelBase = declarative_base(metadata=metadata)

    class ReportingJob(ModelBase):
        __tablename__ = 'reporting_job'
        job_id = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True)
        client_id = Column(BigInteger, nullable=True)

    And the field client_id is what is causing this exception with a usage like the below. Query:

    jobs = session \
        .query(ReportingJob) \
        .filter(ReportingJob.job_id == job_id) \
        .all()
    if jobs:
        # FIXME(Hari): Workaround for the attribute getting lazy-loaded.
        jobs[0].client_id
        return jobs[0]

    This is what triggers the exception later, out of the session scope: msg = msg + ", client_id: %s" % job.client_id
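
    The give-away is deferred_scalar_loader in the stack: by default the Session expires every loaded attribute on commit(), so even a plain column like client_id becomes a lazy refresh on next access, which fails once the instance is detached. "Touching" the attribute only helps if nothing expires it afterwards. A sketch of the usual fix (engine and session setup abbreviated):

        from sqlalchemy.orm import sessionmaker

        # keep attribute values loaded across commit()
        Session = sessionmaker(bind=engine, expire_on_commit=False)

        session = Session()
        job = session.query(ReportingJob) \
                     .filter(ReportingJob.job_id == job_id) \
                     .first()
        session.close()

        print job.client_id   # plain value, no refresh attempted

    (Python 2 print statement, matching the 2.6 paths in the traceback.)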

    Read the article

  • Looping through a SimpleXML object, or turning the whole thing into an array.

    - by Coffee Cup
    I'm trying to work out how to iterate through a returned SimpleXML object. I'm using a toolkit called Tarzan AWS, which connects to Amazon Web Services (SimpleDB, S3, EC2, etc). I'm specifically using SimpleDB. I can put data into the Amazon SimpleDB service, and I can get it back. I just don't know how to handle the SimpleXML object that is returned. The Tarzan AWS documentation says this: Look at the response to navigate through the headers and body of the response. Note that this is an object, not an array, and that the body is a SimpleXML object. Here's a sample of the returned SimpleXML object: [body] = SimpleXMLElement Object ( [QueryWithAttributesResult] = SimpleXMLElement Object ( [Item] = Array ( [0] = SimpleXMLElement Object ( [Name] = message12413344443260 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = john ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is a message. ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334444 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413344443260 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.1 ) ) ) [1] = SimpleXMLElement Object ( [Name] = message12413346907303 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = fred ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is another message ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334690 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413346907303 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.2 ) ) ) ) So what code do I need to get through each of the object items? I'd like to loop through each of them and handle it like a returned MySQL query. For example, I can query SimpleDB and then loop through the SimpleXML so I can display the results on the page. Alternatively, how do you turn the whole shebang into an array? I'm new to SimpleXML, so I apologise if my questions aren't specific enough.
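
    SimpleXML objects are directly traversable, so no conversion is needed; a sketch that walks the structure shown above (assuming $response is the Tarzan response object whose body property holds the SimpleXML):

        foreach ($response->body->QueryWithAttributesResult->Item as $item) {
            echo (string) $item->Name, "\n";          // e.g. message12413344443260
            foreach ($item->Attribute as $attr) {
                $name  = (string) $attr->Name;        // e.g. user
                $value = (string) $attr->Value;       // e.g. john
                echo "  $name = $value\n";
            }
        }

    The (string) casts matter: each element is itself a SimpleXMLElement until cast. For a quick-and-dirty array there is also json_decode(json_encode($response->body), true), at the cost of flattening the attribute/element distinction.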

    Read the article

  • How to stop basic Postfix after-queue script from BCC-ing sender?

    - by mjbraun
    I'm building a content filter for Postfix (2.9.3 package installed via apt on an Ubuntu 12.04 test VM) and I'm starting with a very basic Ruby (1.9.3) template and building up functionality. Strangely, when the script is enabled, messages sent are being forwarded on as normal, but also sent back to the sender, which is not normal. Disabling the script disables this behavior. Any suggestions about what I have to change to stop that from happening? Thanks for any advice!

    /etc/postfix/master.cf (only the lines changed from the default):

    smtp inet n - - - - smtpd -o content_filter=dumper:dummy
    ...
    dumper unix - n n - 10 pipe flags=RF user=mailuser argv=/home/mailuser/mailfilter/dumper.rb ${sender} ${recipient}

    /home/mailuser/mailfilter/dumper.rb:

    #!/usr/bin/env ruby
    require 'open3'

    dir = "/home/mailuser/emails"
    logfile = "maillog.log"

    message = $stdin.read
    cmd = "/usr/sbin/sendmail -G -i #{ARGV[0]} #{ARGV[1]}"
    stdin, stdouterr, wait_thr = Open3.popen2e(cmd)
    stdin.print(message)
    logfile = File.open("#{dir}/#{logfile}", 'a')
    logfile.write(stdouterr)
    stdin.close
    stdouterr.close
    exit(0)
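
    The likely cause is the re-injection command: without -f, sendmail treats every argument as a recipient, so ${sender} becomes an extra recipient and gets its own copy. The stock Postfix content-filter examples pass the sender behind -f and put recipients after a -- separator; a sketch of the one-line fix in dumper.rb:

        # sender as envelope-from; only what follows -- is a recipient
        cmd = "/usr/sbin/sendmail -G -i -f #{ARGV[0]} -- #{ARGV[1]}"

    The master.cf argv=... ${sender} ${recipient} line can stay as-is, since the script decides how those arguments reach sendmail.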

    Read the article

  • How to use Binary Log file for Auditing and Replicating in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track the changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not just give me hyperlinks to the MySQL website; please direct me to the solution. I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, hence I have to move on to the binary log concept. Now I am stuck initiating the concept, as I am not getting the concept of making the server master/slave, so can anybody guide me how to actually initiate it via PHP?
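
    A sketch of the moving parts (log names and paths are assumptions): binary logging is enabled in my.cnf, and the mysqlbinlog utility decodes a log into plain SQL that can be audited or replayed against another server, with no master/slave setup required:

        # my.cnf on the server whose changes you want to capture
        [mysqld]
        log-bin   = mysql-bin
        server-id = 1

        # decode a log into SQL you can read or archive
        mysqlbinlog /var/lib/mysql/mysql-bin.000042 > changes.sql

        # or replay the changes directly against another server
        mysqlbinlog /var/lib/mysql/mysql-bin.000042 | mysql -h otherhost -u root -p

    From PHP this reduces to running mysqlbinlog (e.g. via exec()) or parsing changes.sql; replication proper is the same idea with the slave pulling the log over the wire.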

    Read the article

  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend and they'd like to be able to use Excel as the front end. (The UI will basically be userforms in Excel.) They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database as I don't think it is fit for that purpose, and am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.] To summarise, I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions: How easy is it to link to Access from Excel using ADO / DAO? Is it quite limited in terms of functionality or can I get creative? Do I pay a performance penalty (vs. using forms in Access as the UI)? Assuming that the database will always be updated using ADO / DAO commands from within Excel VBA, does that mean I can have multiple Excel users using that one single Access database and not run into any concurrency issues etc.? Any other things I should be aware of? I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I have never really done an Excel / Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems like more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex
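
    For the flavor of it, a minimal sketch of the ADO round trip from Excel VBA (path, table and sheet names are assumptions; needs a reference to Microsoft ActiveX Data Objects):

        Sub PullFromAccess()
            Dim cn As New ADODB.Connection
            Dim rs As New ADODB.Recordset
            ' Jet provider for a classic .mdb database
            cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                    "Data Source=C:\data\friend.mdb"
            rs.Open "SELECT * FROM Orders WHERE Qty > 10", cn
            ' dump the recordset straight into the worksheet
            Sheet1.Range("A2").CopyFromRecordset rs
            rs.Close
            cn.Close
        End Sub

    Writes go the same way with cn.Execute "INSERT/UPDATE ...". On concurrency: Jet copes with a handful of simultaneous ADO writers, but its locking is file-based, so keeping operations short matters.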

    Read the article

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore { ... public IUser this[Guid userId] { get { User user = (from u in _dataContext.Users where u.Id == userId select u).FirstOrDefault(); return user; } } ... } One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that: 1) We aren't indexing into an in-memory collection, the indexer is basically performing a hidden SQL query 2) Using a Guid in an indexer is bad (FxCop flagged this also) 3) Returning null from an indexer isn't normal behaviour 4) An API user generally wouldn't expect any of this behaviour I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it was a concrete in-memory collection here. Also bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.

    Read the article

  • DataTable vs. Collection in .Net

    - by B Pete
    I am writing a program that needs to read a set of records that describe the register map of a device I need to communicate with. Each record will have a handful of fields that describe the properties of each register. I don't really need to edit or modify the data in my VB or C# program, though I would like to be able to display the data on a grid. I would like to store the data in a CSV file, or perhaps an XML file. I need to enable users to edit the data off-line, preferably in Excel. I am considering using a DataTable or a Collection of "Register" objects (which I would define). I prototyped a DataTable, and found I can read/write XML easily using the built-in methods and I can easily bind to a DataGridView. I was not able to find a way to retrieve info on a single register without using a query that returns a collection of rows, even though I defined a unique primary key column. The syntax to get a value from a column is also complex, though I could be missing something on both counts. I'm tempted to use a collection of "Register" objects that I can access via a unique key. It would be a little more coding up front, but seems like a cleaner solution overall. I should still be able to use LINQ to DataSet to query subsets of registers when I need them, but would also be able to grab a single field using the key value, something like this: Registers(keyValue).fieldName. Which would be a cleaner approach to the problem? Is there a way to read/write XML into a Collection without needing custom code? Could this be accomplished using String for a key?
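
    On the single-record lookup: with a primary key defined, DataTable.Rows.Find fetches one row directly, with no query or row collection involved (column and key names here are hypothetical):

        // declare the key column once, after the table is built
        table.PrimaryKey = new DataColumn[] { table.Columns["RegisterName"] };

        // direct lookup by key; returns null if absent
        DataRow row = table.Rows.Find("STATUS_REG");
        if (row != null)
        {
            string address = (string) row["Address"];
        }

    Field access does stay string-indexed, which is the syntax noise mentioned above; the typed Register-collection design avoids that, at the cost of hand-rolled XML serialization.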

    Read the article

  • Sort on DataView does not work if DataTable has zero rows

    - by BigBlondeViking
    We have a WPF app that has a DataGrid inside a ListView. private DataTable table_; We do a bunch of dynamic column generation (depending on the report we are showing). We then run a query and fill the DataTable row by row; this query may or may not have data (not the problem, an empty grid is expected). We set the ListView's ItemsSource to the DefaultView of the DataTable: lv.ItemsSource = table_.DefaultView; We then (looking at the user's past usage of the app) set the sort on the column. Sort method below: private void Sort(string sortBy, ListSortDirection direction) { var dataView = CollectionViewSource.GetDefaultView(lv.ItemsSource); dataView.SortDescriptions.Clear(); var sd = new SortDescription(sortBy, direction); dataView.SortDescriptions.Add(sd); dataView.Refresh(); } In the zero-DataTable-rows scenario, the sort does not "hold", and if we dynamically add rows they will not be in sorted order. If the DataTable has at least 1 row when the sort is applied, and we dynamically add rows to the DataTable, the rows come in sorted correctly. I have built a standalone app that replicates this... It is an annoyance and I can add a check to see if the DataTable was empty and re-sort... Anyone know what's going on here, and am I doing something wrong? FYI: What we based this off of comes from the MSDN as well: http://msdn.microsoft.com/en-us/library/ms745786.aspx
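
    A workaround sketch (same method signature as above): bypass the collection view and put the sort on the ADO.NET DataView itself, which persists even when the table is empty at the moment it is applied:

        private void Sort(string sortBy, ListSortDirection direction)
        {
            // DataView.Sort is a column-list string, e.g. "Name DESC"
            table_.DefaultView.Sort = sortBy +
                (direction == ListSortDirection.Descending ? " DESC" : " ASC");
        }

    Whether the collection view's SortDescriptions should survive an initially empty source is arguably a binding quirk; sorting at the DataView level sidesteps the question entirely.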

    Read the article
