Search Results

Search found 22756 results on 911 pages for 'power query'.

Page 728/911 | < Previous Page | 724 725 726 727 728 729 730 731 732 733 734 735  | Next Page >

  • Kendo UI, searchable ComboBox

    - by user2083524
    I am using the free Kendo UI Core framework. I am looking for a searchable ComboBox that fires the SQL only after inserting, for example, 2 letters. There are more than 10,000 items behind my list box, and it now takes too much time when I load or refresh the page. Is it possible to trigger the SQL query only on user input, like the AutoComplete widget does? My code is:

        <link href="test/styles/kendo.common.min.css" rel="stylesheet" />
        <link href="test/styles/kendo.default.min.css" rel="stylesheet" />
        <script src="test/js/jquery.min.js"></script>
        <script src="test/js/kendo.ui.core.min.js"></script>
        <script>
        $(document).ready(function() {
            var objekte = $("#objekte").kendoComboBox({
                placeholder: "Objekt auswählen",
                dataTextField: "kurzname",
                dataValueField: "objekt_id",
                minLength: 2,
                delay: 0,
                dataSource: new kendo.data.DataSource({
                    transport: { read: "test/objects.php" },
                    schema: { data: "data" }
                })
            }).data("kendoComboBox");
        }); // this closing of $(document).ready was missing in the snippet
        </script>
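
    For the query to fire only on typing, the Kendo DataSource generally also needs serverFiltering: true next to minLength, so the typed text is sent to the server instead of being filtered client-side. The read endpoint can then restrict the result set. A hedged sketch of what test/objects.php might run, with table and column names guessed from the widget configuration:

        -- return only candidates matching the typed prefix (placeholder names)
        SELECT objekt_id, kurzname
        FROM objekte
        WHERE kurzname LIKE CONCAT(?, '%')   -- ? is the text the widget sends
        ORDER BY kurzname
        LIMIT 50;

    With a query shaped like this, the 10,000-row table is never shipped to the browser in full.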

    Read the article

  • Can't get lm-sensors to load ATI Radeon temp and fan or output all settings

    - by woody
    New to Linux and having minor issues :/ . I followed this guide initially but did not receive the proper output: it did not show my ATI Radeon HD 5000 temp or fan speed. Then I used this guide; the same problems showed up. No issues installing and no errors. I think it is not reading i2c for some reason. The proprietary driver is installed and functioning correctly according to fglrxinfo, and I can use aticonfig commands to view both temp and fan. Any ideas on how to get the ATI Radeon sensors working under 'sensors'? When I run 'sudo sensors-detect' this is my output:

        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: LENOVO IdeaPad Y560 (laptop)
        # Board: Lenovo KL3

        This program will help you determine which kernel modules you need
        to load to use lm_sensors most effectively. It is generally safe
        and recommended to accept the default answers to all questions,
        unless you know what you're doing.

        Some south bridges, CPUs or memory controllers contain embedded sensors.
        Do you want to scan for them? This is totally safe. (YES/no): y
        Silicon Integrated Systems SIS5595...                       No
        VIA VT82C686 Integrated Sensors...                          No
        VIA VT8231 Integrated Sensors...                            No
        AMD K8 thermal sensors...                                   No
        AMD Family 10h thermal sensors...                           No
        AMD Family 11h thermal sensors...                           No
        AMD Family 12h and 14h thermal sensors...                   No
        AMD Family 15h thermal sensors...                           No
        AMD Family 15h power sensors...                             No
        Intel digital thermal sensor...                             Success!
            (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor...                         No
        VIA C7 thermal sensor...                                    No
        VIA Nano thermal sensor...                                  No

        Some Super I/O chips contain embedded sensors. We have to write to
        standard I/O ports to probe them. This is usually safe.
        Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'...               Yes
        Found unknown chip with ID 0x8502
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'...               No
        Trying family `SMSC'...                                     No
        Trying family `VIA/Winbond/Nuvoton/Fintek'...               No
        Trying family `ITE'...                                      No

        Some hardware monitoring chips are accessible through the ISA I/O ports.
        We have to write to arbitrary I/O ports to probe them. This is usually
        safe though. Yes, you do have ISA I/O ports even if you do not have any
        ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290...       No
        Probing for `National Semiconductor LM79' at 0x290...       No
        Probing for `Winbond W83781D' at 0x290...                   No
        Probing for `Winbond W83782D' at 0x290...                   No

        Lastly, we can probe the I2C/SMBus adapters for connected hardware
        monitoring devices. This is the most risky part, and while it works
        reasonably well on most systems, it has been reported to cause trouble
        on some systems.
        Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)

        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:

        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)

        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will
        contain too many modules. Skip the appropriate ones!
        Do you want to add these lines automatically to /etc/modules? (yes/NO)

    My output for 'sensors' is:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:        +58.0°C  (crit = +100.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:       +56.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 1:       +57.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 2:       +58.0°C  (high = +84.0°C, crit = +100.0°C)
        Core 3:       +57.0°C  (high = +84.0°C, crit = +100.0°C)

    and my '/etc/modules' is:

        # /etc/modules: kernel modules to load at boot time.
        #
        # This file contains the names of kernel modules that should be loaded
        # at boot time, one per line. Lines beginning with "#" are ignored.

        lp
        rtc

        # Generated by sensors-detect on Fri Nov 30 23:24:31 2012
        # Chip drivers
        coretemp

    Read the article

  • [Newbie] How to join MySQL tables

    - by Ivan
    I have an old table like this:

        user> id | name | address | comments

    Now I have to create an "alias" table to allow some users to have an alias name. I've created a new table 'user_alias' like this:

        user_alias> name | user

    But now I have a problem due to my poor SQL level... How do I join both tables to generate something like this:

        1 | my_name    | my_address    | my_comments
        1 | my_alias   | my_address    | my_comments
        2 | other_name | other_address | other_comments

    I mean, I want to write a SELECT query that returns, in the same format as the "user" table, ALL users and ALL aliases. Something like this:

        SELECT user.* FROM user LEFT JOIN user_alias ON `user`=`id`

    but it doesn't work for me.
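
    A hedged sketch of one way to get both real names and aliases back in the shape of the user table, assuming user_alias.user holds the id of the aliased user:

        SELECT id, name, address, comments
        FROM `user`
        UNION ALL
        SELECT u.id, a.name, u.address, u.comments
        FROM `user` u
        INNER JOIN user_alias a ON a.user = u.id
        ORDER BY id;

    The first SELECT contributes every user under their real name; the second contributes one extra row per alias, borrowing the address and comments from the joined user row.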

    Read the article

  • PostgreSQL 8.4 reading OID-style BLOBs with Hibernate

    - by peter
    I am getting a weird case when querying Postgres 8.4 for some records with BLOBs (of type OID) with Hibernate. The query returns all right, but when my code wants to read the content of the BLOB with the simple code below, it gets 0 bytes back:

        public static byte[] readBlob(Blob blob) throws Exception {
            InputStream is = null;
            try {
                is = blob.getBinaryStream();
                return org.apache.commons.io.IOUtils.toByteArray(is);
            } finally {
                if (is != null) {
                    try { is.close(); } catch (Exception e) {}
                }
            }
        }

    Funny thing is that I am getting this behavior only since I started adding more than one such record to the table. The underlying JDBC library is type 3 (postgresql 8.4-701). Can someone give me a hint as to how to solve this issue? Thanks, Peter

    Read the article

  • MySQL CASE - WHEN - THEN returning wrong data type (BLOB)

    - by Jack
    Hi there. I'm creating customizable product attributes in a web store - each attribute can have a different type of data, and each data type is stored in a separate column using the corresponding MySQL data type. I have a query like:

        SELECT
            products.id AS id,
            products.sku AS sku,
            products.name AS name,
            products.url_key AS url_key,
            attributes.name AS attribute,
            CASE
                WHEN `attribute_types`.`type` = 'text'     THEN product_attribute_values.value_text
                WHEN `attribute_types`.`type` = 'float'    THEN product_attribute_values.value_float
                WHEN `attribute_types`.`type` = 'price'    THEN product_attribute_values.value_float
                WHEN `attribute_types`.`type` = 'integer'  THEN product_attribute_values.value_integer
                WHEN `attribute_types`.`type` = 'multiple' THEN product_attribute_values.value_text
                WHEN `attribute_types`.`type` = 'dropdown' THEN product_attribute_values.value_text
                WHEN `attribute_types`.`type` = 'date'     THEN product_attribute_values.value_date
                WHEN `attribute_types`.`type` = 'textarea' THEN product_attribute_values.value_textarea
            END AS value
        FROM (...)

    Now, the problem is that when attribute_types.type equals some type, I want it to return the value as it's stored in the product_attribute_values table. Currently I get a BLOB every time. Should I use type casting, or is there some behind-the-scenes magic that I don't know about, OR maybe a better alternative?
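
    The mixed result types across the WHEN branches are what push MySQL to fall back to a common binary type. A hedged workaround is to CAST the non-character branches so every branch yields character data (whether this fully removes the BLOB depends on the underlying column definitions):

        CASE
            WHEN `attribute_types`.`type` IN ('text', 'multiple', 'dropdown')
                THEN product_attribute_values.value_text
            WHEN `attribute_types`.`type` IN ('float', 'price')
                THEN CAST(product_attribute_values.value_float AS CHAR)
            WHEN `attribute_types`.`type` = 'integer'
                THEN CAST(product_attribute_values.value_integer AS CHAR)
            WHEN `attribute_types`.`type` = 'date'
                THEN CAST(product_attribute_values.value_date AS CHAR)
            WHEN `attribute_types`.`type` = 'textarea'
                THEN product_attribute_values.value_textarea
        END AS value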

    Read the article

  • In Drupal 6, is there a way to take a custom field from the latest post to a taxonomy term, and display it?

    - by user278457
    The title of this question pretty much sums up what I'm asking. I've got a list of taxonomy terms, and I'm using a view to display the latest post to each one. I'd like to also display a custom field, set up in CCK, just under this. Currently, I'm just using the "date updated" of the taxonomy term itself, which was easy to set up in Views. I'd like to drill a little deeper and get the custom "event date" field I've added to the content type last posted to the taxonomy term I'm "viewing". I've got a feeling I'm going to have to write my own database query for this.

        If (I can avoid that) {
            How do I set up such a view?
        } Else {
            What's the best practice for including lower-level database queries alongside Views?
        }
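
    If the custom query turns out to be unavoidable, a hedged Drupal 6-style sketch of the "event date of the latest post in a term" lookup (the content type and field names here are assumptions; the real table names depend on how CCK stored the field):

        -- latest node in term 42 plus its CCK "event date" value (names assumed)
        SELECT n.nid, e.field_event_date_value
        FROM term_node tn
        INNER JOIN node n ON n.vid = tn.vid
        INNER JOIN content_type_event e ON e.vid = n.vid
        WHERE tn.tid = 42
        ORDER BY n.created DESC
        LIMIT 1;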

    Read the article

  • Tomcat, Hibernate and the java.io.EOFException

    - by Marco
    Hi, my Java application, which uses Hibernate and is hosted on Tomcat 6.0, gets the following exception after a long period of inactivity when it tries to access the DB:

        com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception:

        ** BEGIN NESTED EXCEPTION **
        java.io.EOFException
        STACKTRACE:
        java.io.EOFException
            at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1963)
            at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2375)
            at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2874)
            at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1623)
            at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1715)
            at com.mysql.jdbc.Connection.execSQL(Connection.java:3249)
            at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1268)
            at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1403)
            at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
            at org.hibernate.loader.Loader.getResultSet(Loader.java:1812)
            at org.hibernate.loader.Loader.doQuery(Loader.java:697)
            at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
            at org.hibernate.loader.Loader.doList(Loader.java:2232)
            at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129)
            at org.hibernate.loader.Loader.list(Loader.java:2124)
            at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401)
            at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363)
            at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196)
            at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149)
            at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102)

    Any tips? Thanks
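
    A stack trace like this after long inactivity usually means MySQL closed the connection on its side (wait_timeout defaults to 28800 seconds, i.e. 8 hours) while the pool still handed it out. A hedged first check on the MySQL side:

        SHOW VARIABLES LIKE 'wait_timeout';
        -- raising it is only a stopgap (value in seconds):
        SET GLOBAL wait_timeout = 28800;

    The durable fix is usually connection validation in the pool before use, for example c3p0's testConnectionOnCheckout, so stale connections are discarded instead of reused.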

    Read the article

  • Linq to Entities and LEFT OUTER JOIN issue with MANY:1 relations

    - by Robert Koritnik
    Can somebody tell me why Linq to Entities translates many-to-1 relationships to a LEFT OUTER JOIN instead of an INNER JOIN? There is a referential constraint on the DB itself that ensures there is a record in the right table, so an INNER JOIN should be used instead (and it would work much faster). If the relation were many to 0..1, a LEFT OUTER JOIN would be correct.

    Question: is it possible to write the LINQ in a way that translates to an INNER JOIN rather than a LEFT OUTER JOIN? It would speed up query execution a lot... I haven't used eSQL before, but would it be wise to use it instead of LINQ?

    Edit: I updated my tags to include the technology I'm using in the background: Entity Framework V1, Devart dotConnect for MySQL, and a MySQL database. If someone could test whether the same is true on Microsoft SQL Server, it would give me some insight into whether this is Devart's issue or general L2EF functionality... but I suspect EF is the culprit here.
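
    For concreteness, a hedged sketch of the two SQL shapes being compared, with invented table names:

        -- what the provider emits for the many-to-1 navigation (illustrative)
        SELECT o.*, c.name
        FROM orders o
        LEFT OUTER JOIN customers c ON c.id = o.customer_id;

        -- what the NOT NULL foreign key would justify
        SELECT o.*, c.name
        FROM orders o
        INNER JOIN customers c ON c.id = o.customer_id;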

    Read the article

  • How to create multiple tables with the same schema using SQLite JDBC

    - by Space_C0wb0y
    I want to split a large table horizontally, and I would like to make sure that all three resulting tables have the same schema. Currently I am using this piece of code to create the tables:

        statement.executeUpdate(
            "CREATE TABLE AnnotationsMolecularFunction (Id INTEGER PRIMARY KEY ASC AUTOINCREMENT, "
            + "ProteinId NOT NULL, "
            + "GOId NOT NULL, "
            + "UNIQUE (ProteinId, GOId), "   // comma was missing before FOREIGN KEY
            + "FOREIGN KEY(ProteinId) REFERENCES Protein(Id))");

    There is one such statement for each table. This is bad, because if I decide to change the schema later (which will most certainly happen), I will have to change it three times, which begs for errors. So I would like a way to make sure that the other tables have the same schema without explicitly writing it again. I can use:

        statement.executeUpdate(
            "CREATE TABLE AnnotationsBiologicalProcess AS SELECT * FROM AnnotationsMolecularFunction");

    to create the other tables with the same columns, but the constraints are not applied. I could of course just generate the same query string three times with different table names in Java, but I would like to know if there is an SQL way of achieving this.
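
    One hedged alternative that keeps the constraints: SQLite stores each table's original CREATE statement verbatim in sqlite_master, so the Java side can fetch it once, substitute the table name, and execute the result for each sibling table:

        -- returns the full CREATE TABLE text, constraints included
        SELECT sql FROM sqlite_master
        WHERE type = 'table'
          AND name = 'AnnotationsMolecularFunction';

    A plain string replacement of the table name on that single row avoids maintaining three copies of the DDL by hand.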

    Read the article

  • How to combine specific column values in an SQL statement

    - by Mehper C. Palavuzlar
    There are two columns in our database called OB and B2B_OB. There are EP codes, and each EP may have one of the following 4 combinations:

        EP   OB   B2B_OB
        ---  --   ------
        3    X
        7         X
        11
        14   X    X

    What I want to do is combine the X's in the fetched list so that I get an A or B value for each single EP. My conditions are:

        IF (OB IS NULL AND B2B_OB IS NULL) THEN TYPE = A
        IF (OB IS NOT NULL OR B2B_OB IS NOT NULL) THEN TYPE = B

    The list I want to fetch looks like this:

        EP   TYPE
        ---  ----
        3    B
        7    B
        11   A
        14   B

    How can I do this in my query? TIA.
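
    The two conditions collapse into a single CASE expression; a hedged sketch, with the table name as a placeholder:

        SELECT EP,
               CASE
                   WHEN OB IS NULL AND B2B_OB IS NULL THEN 'A'
                   ELSE 'B'
               END AS TYPE
        FROM ep_table;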

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending just to reinstall the base partition. After several seconds, I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding ~100 unique partitions when I run it - they mostly tend to be HFS+ or "Solaris /home" (which I think is just an ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then do data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec, since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size       3907029168 sectors
        /dev/sda: user_max   3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk

        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12

        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT

        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
        Bad GPT partition, invalid signature.

        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872
            EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        Linux Swap 3900755968 3907028975 6273008
            SWAP2 version 1, 3211 MB / 3062 MiB

        Results
        P MS Data 2048 3900753919 3900751872
            EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        P Linux Swap 3900755968 3907028975 6273008
            SWAP2 version 1, 3211 MB / 3062 MiB

        interface_write()
        1 P MS Data 2048 3900753919 3900751872
        2 P Linux Swap 3900755968 3907028975 6273008

        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872
            EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2046 3900753917 3900751872
            EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872
            EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 477915164
        recover_EXT2: part_size 3823321312
        MS Data 4094 3823325405 3823321312
            EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        block_group_nr 1
        ....snip......

        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
        Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
        Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
        Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
        MS Data 174395467 178483851 4088385
        ..... snip (it keeps going on for quite a while)

    Read the article

  • LINQ To Entities - Items, ItemCategories & Tags

    - by Simon
    Hi there, I have an Entity Model that has Items, which can belong to one or more Categories and can have one or more Tags. I need to write a query that finds all tags for a given ItemCategory, preferably with a single call to the database, as this is going to get called fairly often. I currently have:

        Dim q = (From ic In mContext.ItemCategory _
                 Where ic.CategoryID = forCategoryID _
                 Select ic).SelectMany(Function(cat) cat.Items).SelectMany(Function(i) i.Tags) _
                .OrderByDescending(Function(t) t.Items.Count).ToList

    This is nearly there, apart from the fact that it doesn't contain the items for each tag, so I'd have to iterate through, loading the item reference to find out how many items each tag is related to (for font sizing). Ideally, I want to return a List(Of TagCount), which is just a structure containing Tag as a string and Count as an integer. I've looked at Group and Join, but I'm not getting anywhere, so any help would be greatly appreciated!

    Read the article

  • UISearchBar clearButton forces the keyboard to appear

    - by DASKAjA
    I have a UISearchBar which acts as a live filter for a table view. When the keyboard is dismissed via endEditing:, the query text and the gray circular "clear" button remain. From here, if I tap the gray "clear" button, the keyboard reappears as the text is cleared. How do I prevent this? If the keyboard is not currently open, I want that button to clear the text without reopening the keyboard. There is a protocol method that gets called when I tap the clear button, but sending the UISearchBar a resignFirstResponder message there doesn't have any effect on the keyboard.

    Read the article

  • Move a million records from a MEMORY table to a MyISAM table

    - by Prashant
    Hi, I am looking for a fast way to move records from a MEMORY table to a MyISAM table. The MEMORY table has around 0.5 million records. Both tables have exactly the same structure (same number of columns, data types, etc.), but the MyISAM table is indexed (B-TREE) on a few columns. There are around 25 columns, most of which are unsigned integers. I have already tried using an "INSERT INTO ... SELECT * FROM ..." query. But is there any faster way to do this? Appreciate your help. Prashant
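
    INSERT ... SELECT is already the right tool; with 0.5 million rows the time usually goes into maintaining the B-TREE indexes row by row. A hedged variant (table names are placeholders) that defers the index rebuild to a single bulk pass, which MyISAM supports:

        ALTER TABLE myisam_table DISABLE KEYS;                 -- suspend non-unique index updates
        INSERT INTO myisam_table SELECT * FROM memory_table;   -- bulk copy
        ALTER TABLE myisam_table ENABLE KEYS;                  -- rebuild indexes in one pass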

    Read the article

  • Create a Python User() class that both creates new users and modifies existing users

    - by ensnare
    I'm trying to figure out the best way to create a class that can modify and create new users all in one. This is what I'm thinking:

        class User(object):
            def __init__(self, user_id=-1):
                self.user_id = user_id
                if user_id == -1:
                    self.new_user = True
                else:
                    self.new_user = False
                    # fetch all records from db about user_id
                    self._populate()

            def commit(self):
                if self.new_user:
                    pass  # do INSERTs
                else:
                    pass  # do UPDATEs

            def delete(self):
                if self.new_user == False:
                    return False
                # delete user code here

            def _populate(self):
                # Query self.user_id from the database and
                # set all instance variables, e.g.
                # self.name = row['name']
                pass

            def getFullName(self):
                return self.name

        # Create a new user
        >>> u = User()
        >>> u.name = 'Jason Martinez'
        >>> u.password = 'linebreak'
        >>> u.commit()
        >>> print u.getFullName()
        Jason Martinez

        # Update an existing user
        >>> u = User(43)
        >>> u.name = 'New Name Here'
        >>> u.commit()
        >>> print u.getFullName()
        New Name Here

    Is this a logical and clean way to do this? Is there a better way? Thanks.
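
    For reference, a hedged sketch of the SQL the two commit() branches would end up issuing for the usage shown above (table and column names are assumptions):

        -- new_user branch
        INSERT INTO users (name, password) VALUES ('Jason Martinez', 'linebreak');
        -- existing-user branch
        UPDATE users SET name = 'New Name Here' WHERE id = 43;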

    Read the article

  • Microsoft SQL Server 2005 - using the modulo operator

    - by cc0
    So I have a silly problem. I have not used SQL Server much before, or any SQL for that matter. I basically have a minor mathematical problem that I need solved, and I thought modulo would be good for it. I have a number of dates in the database, but I need them rounded off to the closest [dynamic integer] (which could be anything from 0 to 5000000), input as a parameter each time this query is called. So I thought I'd use modulo to find the remainder, then subtract that remainder from the date. If there is a better way, or an integrated function, please let me know! What would be the syntax for that? I've tried a lot of things, but I keep getting error messages saying integers/floats/decimals can't be used with the modulo operator. I tried casting to all kinds of numeric data types. Any help would be appreciated.
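
    datetime values cannot be fed to % directly, which explains the error messages. The usual trick is to measure whole seconds from a fixed anchor date, take the modulo there, and subtract the remainder back off the date. A hedged SQL Server 2005 sketch (column and table names are invented):

        DECLARE @interval int;
        SET @interval = 3600;   -- the dynamic rounding unit, in seconds

        SELECT DATEADD(second,
                       -(DATEDIFF(second, '20000101', EventDate) % @interval),
                       EventDate) AS RoundedDate
        FROM Events;

    This rounds down to the nearest multiple; rounding to the closest multiple needs an extra half-interval added before the modulo is taken.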

    Read the article

  • How to handle the DOM in PHP

    - by From.ME.to.YOU
    My PHP code:

        $dom = new DOMDocument();
        @$dom->loadHTML($file);
        $xpath = new DOMXPath($dom);
        $tags = $xpath->query('//div[@class="text"]');
        foreach ($tags as $tag) {
            echo $tag->textContent;
        }

    What I'm trying to do here is get the content of the div that has class 'text'. But the problem is that when I loop and echo the results, I only get the text; I can't get the HTML code with images and all the HTML tags like p, br, img... etc. I tried to use $tag->nodeValue, but that also didn't work out. Any help!?

    Read the article

  • Fast search in XML files in .NET (or how to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries on XML data. When the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few 10,000 or more), which are typically a few kilobytes in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can it be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
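
    If the embedded-database route is taken, a hedged sketch of a generic path/value index that preserves element semantics rather than doing plain full-text search:

        CREATE TABLE xml_index (
            file_id INTEGER      NOT NULL,  -- which XML file the value came from
            path    VARCHAR(255) NOT NULL,  -- element path, e.g. '/order/customer/name'
            value   VARCHAR(255) NOT NULL   -- the element's text content
        );
        CREATE INDEX idx_xml_path_value ON xml_index (path, value);

    Queries then become indexed equality or range lookups on (path, value) instead of re-parsing tens of thousands of files per search.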

    Read the article

  • DataModelSelection on list exposed via EntityQuery

    - by Kalpana
    Is it possible to enable DataModelSelection on a list page paginated via EntityQuery? (All the examples load the list by performing a query in the @Factory method.) I would like to re-use the existing pagination mechanism and just enable DataModelSelection on top of it. I am also assuming that DataModelSelection is capable of tracking a single row in a list; how do we extend this to support a certain action (say deletion, activation, ...) on multiple rows? I am a newbie and would appreciate any help on this topic. I have already gone through the samples shipped as part of Seam (booking, message), and I have already posted this in the Seam users forum, but I am yet to get a response.

    Read the article

  • IdeaBlade Update

    - by Tolu
    Hi, I'm using IdeaBlade version 3.6. I noticed the following generated SQL update query:

        (@P1 nchar(32), @P2 nvarchar(32), @P3 nvarchar(512), @P4 nchar(32), @P5 int, @P6 nvarchar(32),
         @P7 int, @P8 datetime, @P9 datetime, @P10 datetime, @P11 int, @P12 datetime, @P13 int,
         @P14 int, @P15 int, @P16 nvarchar(32), @P17 nvarchar(128), @P18 nvarchar(32), @P19 nvarchar(32),
         @P20 datetime, @P21 datetime, @P22 bit, @P23 nvarchar(32), @P24 nvarchar(64), @P25 nchar(32))
        update "dbo"."GSS_Documents" set
            "DocumentID"=@P1, "FileName"=@P2, "FilePath"=@P3, "BusinessOfficeID"=@P4, "Pages"=@P5,
            "FileSize"=@P6, "DocumentType"=@P7, "DateCreated"=@P8, "EffectiveDateCreated"=@P9,
            "DateProcessed"=@P10, "ProcessorID"=@P11, "DateReviewed"=@P12, "ReviewerID"=@P13,
            "WorkflowStatus"=@P14, "ApprovalStatus"=@P15, "AccountNumber"=@P16, "AccountName"=@P17,
            "SerialNumber"=@P18, "TransactionID"=@P19, "CriticalDate"=@P20, "EmergencyDate"=@P21,
            "GenerateSMSAlert"=@P22, "CustomerPhoneNumber"=@P23, "CustomerEmailAddress"=@P24
        where "DocumentID"=@P25

    The problem is that DocumentID is the primary key. This update appears to be updating the primary key as well! Any ideas on how to stop this?

    Read the article

  • Export a SQL database into a CSV file and use it with WEKA

    - by Simon
    How can I export a query result from a .sql database into a .csv file? I tried with

        SELECT * FROM players
        INTO OUTFILE 'players.csv'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY ';';

    and my .csv file is something like:

        p1,1,2,3 p2,1,4,5

    But they are not in separated columns; all are in one column. I tried to create a .csv file by myself just to try WEKA, something like:

        p1 1 2 3
        p2 1 4 5

    But WEKA recognizes "p1 1 2 3" as a single attribute. So: how can I correctly export a table from a SQL db to a CSV file? And how can I use it with WEKA?
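
    The ';' line terminator is the likely culprit: the file ends up without real newlines, so everything is read back as one row. A hedged rewrite that also quotes field values:

        SELECT * FROM players
        INTO OUTFILE '/tmp/players.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n';

    WEKA's CSV loader additionally treats the first row as the attribute names, so a header line (e.g. name,a,b,c) has to be prepended before loading; the hand-made space-separated file fails for the same reason, since CSV splits on commas, not spaces.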

    Read the article

  • Prolog newbie question: Making a procedure to print Hello World

    - by dmindreader
    I want to load this simple snippet into my editor:

        Write :- repeat, write("hi"), nl, fail.

    so that it prints "hi". What should I do? I'm currently doing File -> New and saving a file named Write into E:\Program Files\pl\xpce\prolog\lib. When running the query:

        ?- Write.

    it prints:

        1 ?- Write.
        % ... 1,000,000 ............ 10,000,000 years later
        %
        %       >> 42 << (last release gives the question)

    Why?

    Read the article

  • MySQL Error #1064

    - by 01010011
    Hi, I keep getting this error:

        MySQL said: #1064 - You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        'INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_na'
        at line 15

    with this query:

        USE books;

        DROP TABLE IF EXISTS book;

        CREATE TABLE `books`.`book`(
            `book_id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            `isbn10` VARCHAR(15) NOT NULL,
            `isbn13` VARCHAR(15) NOT NULL,
            `title` VARCHAR(50) NOT NULL,
            `edition` VARCHAR(50) NOT NULL,
            `author_f_name` VARCHAR(50) NOT NULL,
            `author_m_name` VARCHAR(50) NOT NULL,
            `author_l_name` VARCHAR(50) NOT NULL,
            `cond` ENUM('as new','very good','good','fair','poor') NOT NULL,
            `price` DECIMAL(8,2) NOT NULL,
            `genre` VARCHAR(50) NOT NULL,
            `quantity` INT NOT NULL)

        INSERT INTO books.book(isbn10,isbn13,title,edition,author_f_name,author_m_name,
            author_l_name,cond,price,genre,quantity)
        VALUES ('0136061699','978-0136061694','Software Engineering: Theory and Practice',
            '4','Shari','Lawrence','Pfleeger','very good','50','Computing','2');

    Any idea what the problem is?
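
    The error points at the text just before INSERT, which suggests the CREATE TABLE statement is never terminated, so the parser tries to swallow the INSERT as part of it. A hedged fix is simply to close the CREATE with a semicolon before the INSERT:

            ...
            `genre` VARCHAR(50) NOT NULL,
            `quantity` INT NOT NULL
        );   -- terminate the CREATE TABLE statement here

        INSERT INTO books.book (isbn10, isbn13, ...) VALUES (...);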

    Read the article

  • jQuery: repeat a process once a reply is received from a PHP file

    - by air
    I have one PHP file which processes the adding of records to the database from an array. For example, the array has 5 items:

        an = 'abc','xyz','ert','wer','oiu'

    I want to call the PHP file with jQuery's ajax method:

        var um = an.split(',');
        var counter = 0;
        if (counter < um.length) {
            $("#sending_count").html("Processing Record " + (counter + 1) + " of " + um.length);
            var data = { uid: um[counter] };
            $.ajax({
                type: "POST",
                url: "save.php",
                data: data,
                success: function (html) {
                    counter++;
                }
            });
        }

    It completes the whole process, but save.php is still working while the next call fires. What I want is for it to stop after one item until the save.php process completes, then wait 10 seconds and start adding the second element. Thanks

    Read the article

  • Using key().id() on GAE Datastore and reverse geocoding

    - by Ivan Slaughter
    I have 6,000 records of districts and subdistricts. I need to represent these in a dependent dropdown. The data model is, for example:

        class Location(db.Model):
            location_name = db.StringProperty()
            location_parent = db.IntegerProperty()

    Should location_parent be a reference to key() or to id()? I still cannot decide which one is good. When I use key() as the reference and then use JSON to create the jQuery dependent dropdown, my page loading/query time and the time to render the dropdown are quite slow. Can I use key().id() as the dropdown option value to lighten the page load? Any better solution for this parent/child reference for the dropdown? For example: for the district records/entities the location_parent is null; for subdistrict records, location_name will contain a reference to the parent district record. The other issue is reverse geocoding the location to store or display a geoPoint (lat, long). Does the Google Maps API always find the exact lat/long of specific region boundaries, and is there any error checking for the result?

    Read the article
