Search Results

Search found 27248 results on 1090 pages for 'table adapter'.

  • Enabling "USB Printing Support" in Windows 7

    - by Kevin Dente
    I'm trying to use an old parallel-port based printer with a USB-to-parallel port adapter on Windows 7. When I plug it into the USB port on the computer it's listed as an unrecognized device. I know that these cables typically use the "USB Printing Support" driver, which makes USB ports show up as printer ports in the printer dialog. Is there a way to manually add USB Printing Support to Windows 7, since it isn't being added automatically?

  • Mysql partitioning: Partitions outside of date range is included

    - by Sturlum
    Hi, I have just tried to configure partitions based on date, but it seems that mysql still includes a partition with no relevant data. It will use the relevant partition but also include the oldest for some reason. Am I doing it wrong? The version is 5.1.44 (MyISAM). I first added a few partitions based on "day", which is of type "date":

        ALTER TABLE ptest PARTITION BY RANGE(TO_DAYS(day)) (
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );

    After a query, I find that it uses the "old" partition, that should not contain any relevant data:

        mysql> explain partitions select * from ptest where day between '2010-03-11' and '2010-03-12';
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | ptest | p1,p4      | range | day           | day  | 3       | NULL |   79 | Using where |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+

    When I select a single day, it works:

        mysql> explain partitions select * from ptest where day = '2010-03-11';
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        | id | select_type | table | partitions | type | possible_keys | key  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        |  1 | SIMPLE      | ptest | p4         | ref  | day           | day  | 3       | const |   39 |       |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+

  • Using the internet connection of a Remote Desktop

    - by hattenn
    I want to use the internet connection of the servers at my university. I have a remote desktop account, and I have tried setting up VPN, but all VPN or proxy server software I could think of was blocked. Windows' built-in VPN is blocked too. When I go to "Change Adapter Settings" and click on "File-New Incoming Connection", it says "Access denied." What would your suggestion be to use the internet connection of the remote desktop?

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an Execution Plan. The Execution Plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:

        Nested Loop Join: best for small inputs, can be optimized with an ordered inner table.
        Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
        Hash Join: best for medium to large inputs, can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                       on firstRow.Field<object>(randomObject.Property)
                       equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loop Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?
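
    As background, here is a minimal sketch (in Python, purely illustrative, not taken from the question's C#) of the build/probe hash-join pattern. LINQ to Objects' Enumerable.Join is generally described as working this way - building a lookup over the inner sequence and probing it with the outer one - rather than choosing among nested-loop, merge, and hash plans the way SQL Server's optimizer does; treat that as an assumption to verify, not as documentation.

        from collections import defaultdict

        def hash_join(outer_rows, inner_rows, outer_key, inner_key):
            # Build phase: index the inner input by its join key.
            lookup = defaultdict(list)
            for inner in inner_rows:
                lookup[inner_key(inner)].append(inner)
            # Probe phase: stream the outer input and emit matching pairs.
            for outer in outer_rows:
                for inner in lookup.get(outer_key(outer), ()):
                    yield outer, inner

        # Hypothetical sample data, for illustration only.
        accounts = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
        comments = [{"account_id": 1, "text": "hi"}, {"account_id": 1, "text": "bye"}]
        pairs = list(hash_join(accounts, comments,
                               lambda a: a["id"], lambda c: c["account_id"]))
        print(pairs)  # two (account, comment) pairs, both for account 1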

  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL select query. Currently (shown in the code below) I am simply creating a list of table rows, one per record. So, nothing complicated. However, I want to divide it into more than one column, depending on the number of returned results. I've been wrapping my brain around how to count this in the PHP, and I'm not getting the results I need.

        <table>
        <? $query = mysql_query("SELECT * FROM `sometable`");
           while($rows = mysql_fetch_array($query)){ ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <? } ?>
        </table>

    Obviously there's one column generated. So if the records returned reach 10, then I want to create a new column. In other words, if the returned results are 12, I have 2 columns. If I have 22 results, I'll have 3 columns, and so on.
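
    The splitting arithmetic described above (a new column for every 10 records) is just slicing the result list in steps of 10. A small sketch of that logic, written in Python for brevity rather than in the question's PHP (where array_chunk() does the same job), with made-up records:

        # Pretend these came back from the query.
        records = ["record %d" % i for i in range(22)]

        per_column = 10
        columns = [records[i:i + per_column] for i in range(0, len(records), per_column)]

        print(len(columns))               # 22 records -> 3 columns
        print([len(c) for c in columns])  # [10, 10, 2]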

  • Crosstab with multiple items

    - by Michael Wexler
    In SPSS, it is (relatively) easy to create a cross tab with multiple variables using the factors as the table heading. So, something like the following (made up data, etc.). Q1, Q2, and Q3 each have either a 1, a 2, or a 3 for each person.

                            1 (very Often)   2 (Rarely)   3 (Never)
        Q1. Likes it              12             15           13
        Q2. Recommends it         22             11           10
        Q3. Used it               22             12            9

    In SPSS, one can even request row, column, or total percentages. I've tried table(), ftable(), xtab(), CrossTable() from gmodels, and CrossTable() from descr, and none of these can handle (afaik) multiple variables; they mostly seem to handle 1 variable crossed with another variable, and the 3rd creates layers. Is there a package with some good cross tabbing/table examples that I could use to figure this out? I'm sure I'm missing something simple, so I appreciate you pointing out what I missed. Perhaps I have to generate each row as a separate list and then make a dataframe and print the dataframe?

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32 pawnshop application: one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id;
        unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name;
        unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer]. When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters a 'D' to change to the detail transactions table, all transactions are automatically queried, then the clerk enters an 'A' to add a new transaction. The problem with using customer.id joined to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but they are not in the same order as the customer names, so when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented using id joins and confirmed the decrease in performance. How can I use id joins instead of name joins and still preserve the clustered transaction order by name if transactions has no name column?

  • ATI Radeon 5850 video card one analog, one digital and one tv connected to hdmi, unable to run all 3

    - by McLovin
    I have one analog monitor connected through a DVI adapter, one digital monitor connected through DVI, and a TV connected through HDMI. I'm unable to run all 3 at the same time; when I try to extend the desktop it asks me to disable one of the monitors, so I can only have 2 at one time. I tried finding a solution, but according to some forums I should be able to achieve it: since one is analog, I should be able to run 3 outputs with HDMI that way. Does anyone have any suggestions? Thank you.

  • Can anyone recommend a USB to DVI adaptor which runs well under Windows 7?

    - by dbruning
    Wanting to connect a third monitor to my aging Dell D830 laptop. It has 2 monitor outputs (VGA + DVI), and I'd like to add a third via USB. This is the sort of thing I'm looking for ("ST Lab U-480 USB to DVI Adapter": http://www.ascent.co.nz/productspecification.aspx?ItemID=381944) ... but it doesn't explicitly support Windows 7 & the manufacturer's site doesn't show any driver updates. TIA

  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that references children that are themselves MyItems. How do I set up an NHibernate mapping file to store this item?

        public class MyItem
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Version { get; set; }
            public virtual IList<MyItem> Children { get; set; }
        }

    So roughly the hbm.xml would be:

        <class name="MyItem" table="tb_myitem">
          <id name="Id" column="id" type="String" length="32">
            <generator class="uuid.hex" />
          </id>
          <property name="Name" column="name" />
          <property name="Version" column="version" />
          <bag name="Children" cascade="all-delete-orphan" lazy="false">
            <key column="children_id" />
            <one-to-many class="MyItem" not-found="ignore"/>
          </bag>
        </class>

    I don't think this would work. Perhaps I need to create another class, say MyItemChildren, and use that as the Children member and then do the mapping in that class? This would mean having two tables. One table holds the MyItem and the other table holds references from my item. NOTE: A child item could have many parents.

  • Not delete params in URL, when sorting [ RAILS 3 ]

    - by kamil
    I have sortable table columns, made like this: http://asciicasts.com/episodes/228-sortable-table-columns. And I have simple filter options for two columns in the table, made with select_tag (GET method). These two functions don't work together: when I change the filter, the sort parameter disappears, and vice versa.

        <th><%= sortable "Id" %></th>
        <th>
          Status<br/>
          <form method="get">
          <%= select_tag(:status, options_for_select([['All', 'all']]+@statuses, params[:status]),{:onchange => 'this.form.submit()'}) %>
        </th>
        <th><%= sortable "Operation" %></th>
        <th>
          Processor<br/>
          <%= select_tag(:processor, options_for_select([['All', 'all']]+@processor_names, params[:processor]),{:onchange => 'this.form.submit()'}) %>
          </form>
        </th>

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++ and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). So currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" )
            {
                if ( argc < 1 ) { Error('blah'); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" )
            {
                if ( argc < 1 ) { Error('blah'); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" )
            {
                exit( 0 );
            }

    And now think of each else if being 10 times more complicated, and there being 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how to create some sort of libraries that contain all the functions and, if they are imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I wanted to achieve is that there is, e.g., an (extern?) string library for my language, e.g. string, and it is imported from within a program in that language, example:

        import string
        myString = "abcde"
        print string.at( myString, 2 ) # output: c

    My problems:

        How to separate the function libs from the core interpreter and load them?
        How to get all their functions into a list and add it to the symbol table when needed?

    What I was thinking of doing: at the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is then added to the symbol table. Disadvantages that spring to mind: all libraries are directly in the interpreter core, size grows and it will definitely slow it down.
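
    The RegisterFunction idea sketched above is essentially a dispatch table keyed by (library, function name). Here is a rough illustration of that shape in Python rather than the interpreter's C++; every name in it is hypothetical, and it only mirrors the registration/import flow described in the question, not the actual interpreter.

        # Registry of native functions, keyed by (library, name). All names are made up.
        NATIVE_FUNCTIONS = {}

        def register_function(library, name):
            """Decorator: add a native function to the global registry."""
            def wrap(func):
                NATIVE_FUNCTIONS[(library, name)] = func
                return func
            return wrap

        @register_function("string", "at")
        def string_at(args):
            if len(args) < 2:
                raise ValueError("string.at expects a string and an index")
            return args[0][int(args[1])]

        def import_library(symbol_table, library):
            """What 'import X' would do: copy one library's functions into the symbol table."""
            for (lib, name), func in NATIVE_FUNCTIONS.items():
                if lib == library:
                    symbol_table["%s.%s" % (lib, name)] = func

        symbols = {}
        import_library(symbols, "string")
        print(symbols["string.at"](["abcde", 2]))  # -> c

    One common variation on this design also addresses the size concern at the end of the question: each library can live in its own shared object/DLL that the interpreter loads on demand and that calls RegisterFunction when loaded, so nothing has to be compiled into the core.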

  • Coupling an MFC CListCtrl and CTreeCtrl to get a view of the whole tree, not just one node at a time

    - by omatai
    Consider Windows Explorer (or regedit or similar). To the left side, there is a tree view, and to the right, a list view. In all cases I know of, the contents of the right pane reflect the attributes of the selected node from the left pane. This is all well and good... but just not what I want. The nodes of the tree I want to display have only a few attributes (2-3) associated with each node - a reasonable amount to display horizontally as a row in a table. Rather than waste all that list view space on a single node with very few properties, I would like to have my list view display a table of the whole tree's properties (at least the part of the tree currently expanded). So the nth line in the left view (tree) will correspond directly to the nth line in the right view (list/table), and I will get a decent overview of the properties of my tree. Does anyone know of code that does this? I am guessing that slaving a CListCtrl to a CTreeCtrl would be the way to go, and somehow overriding the vertical scrolling functions so that they are locked together. I'm just not sure that it is possible to lock the scrolls together like this... among other things! All advice gratefully welcomed :-)

  • sql statement question. Need to query 3 tables in one go!

    - by Stefan
    Hey there, I have an SQL database. In this database are 3 tables I need to query. The first table has all the item info, called item, and the other two tables hold the votes and comments data: userComment for the comments, and the third, userItem, for the votes. I currently have a function which uses this SQL query to get the latest, more popular items (in terms of both votes and comments):

        $sql = "SELECT itemID, COUNT(*) AS cnt
                FROM (
                    SELECT `itemID` FROM `userItem`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY
                    UNION ALL
                    SELECT `itemID` FROM `userComment`
                    WHERE FROM_UNIXTIME( `time` ) >= NOW() - INTERVAL 1 DAY AND `itemID` > 0
                ) q
                GROUP BY `itemID`
                ORDER BY cnt DESC";

    I know how to change this to use either votes alone or comments alone... HOWEVER - I need to query the database to only return the itemIDs of the ones which satisfy conditions that exist only in the item table, namely WHERE categoryID = 'xx' AND typeID = 'xx'. If the SQL ninja could please help me on this one? Do I have to first return the results from the above query, then for each itemID in the fetched array check it against the item table to see if it fits the conditions and build a new array - or is that overkill? Thanks, Stefan

  • Setting <td> value using jquery

    - by timk
    Hello, I have the div structure shown below. For the second <td> in the table I want to replace &nbsp; with a hyperlink whose href attribute is stored in the variable myLink. How can I do this with jQuery? Please help. Thank you.

        <div class="pbHeader">
            <table cellspacing="0" cellpadding="0" border="0">
                <tbody>
                    <tr>
                        <td class="pbTitle">
                            <h2 class="mainTitle">Transfer Membership</h2>
                        </td>
                        <td>
                            &nbsp;
                        </td>
                    </tr>
                </tbody>
            </table>
        </div>

  • SQLAlchemy Mapping problem

    - by asdvalkn
    Dear Everyone, I am trying to get SQLAlchemy to correctly map my data. Note that a unified group is basically a group of groups (one UnifiedGroup maps to many groups, but each group can only map to one ug). So basically this is the definition of my unifiedGroups table:

        CREATE TABLE `unifiedGroups` (
            `ugID` INT AUTO_INCREMENT,
            `gID` INT NOT NULL,
            PRIMARY KEY(`ugID`, `gID`),
            KEY( `gID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 ;

    Note that each row is a (ugID, gID) tuple. (I do not know beforehand how many gIDs there are per ugID, so this is probably the most sensible and simplest method.) Definition of my UnifiedGroup class:

        class UnifiedGroup(object):
            """UnifiedProduct behaves very much like a group"""
            def __init__(self, ugID):
                self.ugID=ugID
                #Added by mapping
                self.groups=False

            def __str__(self):
                return '<%s:%s>' % (self.ugID, ','.join( [g for g in self.groups]))

    These are my mapping tables:

        tb_groupsInfo = Table( 'groupsInfo', metadata,
            Column('gID', Integer, primary_key=True),
            Column('gName', String(128)),
        )
        tb_unifiedGroups = Table( 'unifiedGroups', metadata,
            Column('ugID', Integer, primary_key=True),
            Column('gID', Integer, ForeignKey('groupsInfo.gID')),
        )

    My mapper maps in the following manner:

        mapper( UnifiedGroup, tb_unifiedGroups, properties={
            'groups': relation( Group, backref='unifiedGroup')
        })

    However, when I try groupInstance.unifiedGroup, I am getting an empty list [], while groupInstance.unifiedGroup.groups returns me an error:

        Traceback (most recent call last):
          File "Mapping.py", line 119, in <module>
            print p.group.unifiedGroup.groups
        AttributeError: 'InstrumentedList' object has no attribute 'groups'

    What is wrong?

  • odp.net SQL query retrieve set of rows from two input arrays.

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column.

        select pkt1.id, pkt1.id2, ...
        from PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2
        where pkt1.id = t1.column_value
          and pkt1.id2 = t2.column_value

    I then bind the values with two int[] in ODP.NET. This returns all different combinations of my resulting rows. So if I am expecting 13 rows I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked: value t1[4] should be used with t2[4] and not with all the different values in t2. Using distinct solves my problem, but I'm wondering if my approach is wrong. Anyone have any pointers on how to solve this the best way? One way might be to use a for-loop accessing each index in t1 and t2 sequentially, but I wonder what will be more efficient.

    Edit: actually distinct won't solve my problem, it just did it based on my input values (all values in t2 = 0).

  • Could not load drivers - Code 31

    - by alexander7567
    I get this error when installing any network adapter on my computer:

        The device is not working properly because Windows cannot load the drivers required for this device. (Code 31)

    I have tried many different adapters and many different drivers. Any ideas? OS: Windows XP Home SP3. Here are the Hardware IDs for the onboard NIC:

        PCI\VEN_8086&DEV_1050&SUBSYS_2019107B&REV_02
        PCI\VEN_8086&DEV_1050&SUBSYS_2019107B
        PCI\VEN_8086&DEV_1050&CC_020000
        PCI\VEN_8086&DEV_1050&CC_0200

    (Device Manager screenshot)

  • PHP GET issue - it aint workin!

    - by benhowdle89
    I have a simple PHP form which displays inputs with values from a MySQL DB and sends the form results to another page, which updates a DB table based on the GET results:

        echo "<form method='get' action='updateprojects.php'>";
        echo "<table>";
        echo "<tr>";
        echo "  <th>Project No</th>
                <th>Customer Name</th>
                <th>Description</th>
              </tr>";
        while($row = mysql_fetch_array($result))
        {
            echo "<tr>";
            echo "<td><input value=" . $row['project_no'] . "></input></td>";
            echo "<td><input value='" . $row['cust_name'] . "'></input></td>";
            echo "<td><input value='" . $row['description'] . "'></input></td>";
            echo "</tr>";
        }
        echo "</table>";
        echo "<input type='submit' value='Update' />";
        echo "</form>";

    On updateprojects.php I have set up the GET variables and even echoed them to check, but nothing comes through! Cannot see why!? This is the start of updateprojects.php:

        echo $_GET['project_no'].$_GET['cust_name'].$_GET['description'];

  • Using group_by with fields_for and accepts_nested_attributes_for

    - by Derek
    I have the following Rails models:

        class Release < ActiveRecord::Base
          has_many :release_questionnaires, :dependent => :destroy
          accepts_nested_attributes_for :release_questionnaires
          ...
        end

        class ReleaseQuestionnaire < ActiveRecord::Base
          belongs_to :release
          belongs_to :milestone
          ...
        end

    In my view code, I have the following form:

        <% form_for @release, ... do |f| %>
          ...
          <table class="questionnaires">
            <% f.fields_for :release_questionnaires, @release.release_questionnaires.sort_by{|ra| ra.questionnaire.name} do |builder| %>
              ...
            <% end %>
          </table>
        <% end %>

    This works and allows me to view and edit the questionnaires as desired. However, I have an additional requirement to break the questionnaires out into their own tables grouped by the milestone they are associated with, rather than in a single table. It appears as though the group_by method is designed to accomplish this, but I cannot get it to work as desired inside the tag. It may be that I'm missing something obvious, as I am a beginner... Any help is appreciated.

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many type join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:

        SELECT a.*, c.*
        FROM (SELECT * FROM accounts LIMIT 100) a
        LEFT JOIN `comments` c ON c.account_id = a.id
        ORDER BY a.id

    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way... this might be the absolute simplest way to do this, which I'm okay with as an answer. I'm simply trying to figure out if there IS another way to do this that could potentially compete with the above statement in terms of speed.

  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem: understand process fallout using analytics data. Here is what we have done so far:

        Build a dictionary table with every possible process step
        Find each process "start"
        Find the last step for each start
        Join the dictionary table to the last step to find the path to the final step

    In the final report output we end up with a list of paths for each start to each final step:

        User    Fallout Step HierarchyID.ToString()
        A       1/1/1
        B       1/1/1/1/1
        C       1/1/1/1
        D       1/1/1
        E       1/1

    What this means is that five users (A-E) started the process. Assume only user B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

        Step    Unique Users
        1       5
        2       5
        3       4
        4       2
        5       1

    The easiest solution I could think of is to take each hierarchyID.ToString(), parse that out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions, e.g. IsAncestorOf. Any ideas or thoughts on how I could write this? Maybe a recursive CTE?
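
    To make the counting in the desired output concrete, here is the "parse each path into sub-paths and GROUP BY" idea from the question sketched outside the database, in Python, for the unbranched example only; the real implementation would stay in T-SQL (for instance a recursive CTE or a join against the step dictionary using IsAncestorOf), which this sketch does not attempt.

        from collections import defaultdict

        # Each user's deepest step, as a hierarchyid-style path (sample data from the question).
        fallout = {"A": "1/1/1", "B": "1/1/1/1/1", "C": "1/1/1/1", "D": "1/1/1", "E": "1/1"}

        users_per_step = defaultdict(set)
        for user, path in fallout.items():
            depth = len(path.split("/"))
            # A user whose last step is at depth N also passed through steps 1..N.
            for step in range(1, depth + 1):
                users_per_step[step].add(user)

        for step in sorted(users_per_step):
            print(step, len(users_per_step[step]))
        # 1 5 / 2 5 / 3 4 / 4 2 / 5 1 - matching the desired report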

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if result is None:
            """Not Found in Cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000) #(2592000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few tables (relationships). So in my view I get records from one table, and in the template I show them along with related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there some easy way to cache queries from templates? Or do I have to build some big structure in my view (with all related tables), cache it, and send it to the template?
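
    For the last question, one common pattern is to do the expensive, related-table work once in the view, cache the evaluated result, and hand the template plain objects; Django's template fragment caching ({% load cache %} / {% cache %}) is the other usual option. A rough sketch of the view-side approach - the helper is generic, and the model and key names in the comments are made up:

        from django.core.cache import cache

        def cached_list(queryset, cache_key, timeout=600):
            """Evaluate a queryset once, cache the resulting list, reuse it on later requests."""
            rows = cache.get(cache_key)
            if rows is None:
                rows = list(queryset)  # forces the query exactly once
                cache.set(cache_key, rows, timeout)
            return rows

        # In a view (hypothetical model and field names):
        #   products = cached_list(
        #       Product.objects.select_related('category', 'manufacturer'),
        #       'products_with_relations')
        # select_related() follows the foreign keys in a single query, so the template
        # no longer fires one extra query per row when it touches related objects.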

  • ASP.NET MVC4: How do I keep from having multiple identical results in my lookup tables

    - by sehummel
    I'm new to ASP.NET so this may be an easy question. I'm seeding my database with several rows of dummy data. Here is one of my rows:

        new Software { Title = "Microsoft Office", Version = "2012",
            SerialNumber = "12346231434543", Platform = "PC", Notes = "Macs really rock!",
            PurchaseDate = "2011-12-04", Suite = true, SubscriptionEndDate = null, SeatCount = 0,
            SoftwareTypes = new List<SoftwareType> { new SoftwareType { Type="Suite" }},
            Locations = new List<Location> { new Location { LocationName = "Paradise" }},
            Publishers = new List<SoftwarePublisher> { new SoftwarePublisher { Publisher = "Microsoft" }}}

    But when I do this, a new row is created for each location, with the LocationName being set in each row like this. We only have two locations. How do I get it to create a LocationID property for the Software class and in my Locations class? Here is my Location class:

        public class Location
        {
            public int Id { get; set; }

            [Required]
            [StringLength(20)]
            public string LocationName { get; set; }

            public virtual Software Software { get; set; }
        }

    I have this line in my Software class to reference this table:

        public virtual List<Location> Locations { get; set; }

    Again, what I want when I am done is a Locations table with two entries, and a LocationID field in my Software table. How do I do this?
