Search Results

Search found 28443 results on 1138 pages for 'partition table'.

Page 380 of 1138

  • How to print a Perl 2-dimensional array?

    - by Matt Pascoe
    I am trying to write a simple Perl script that reads a *.csv file, places its rows in a two-dimensional array, and then prints one item from the array followed by one row of the array.

        #!/usr/bin/perl
        use strict;
        use warnings;

        open(CSV, $ARGV[0]) || die("Cannot open the $ARGV[0] file: $!");

        my @row;
        my @table;

        while(<CSV>) {
            @row = split(/\s*,\s*/, $_);
            push(@table, @row);
        }
        close CSV || die $!;

        foreach my $element ( @{ $table[0] } ) {
            print $element, "\n";
        }
        print "$table[0][1]\n";

    When I run this script nothing prints and I receive the following error: Can't use string ("1") as an ARRAY ref while "strict refs" in use at ./scripts.pl line 16. I have looked in a number of other forums and am still not sure how to fix this issue. Can anyone help me fix this script?
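
    A likely fix, shown as a sketch rather than a tested drop-in: push a reference to a copy of each row instead of the row's individual elements, so that @table really holds array references and $table[0] can be dereferenced as an array:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Note: splitting on /\s*,\s*/ only handles simple CSV; quoted fields with
        # embedded commas would need Text::CSV instead.
        open(my $csv, '<', $ARGV[0]) or die "Cannot open the $ARGV[0] file: $!";

        my @table;
        while (my $line = <$csv>) {
            chomp $line;
            my @row = split /\s*,\s*/, $line;
            push @table, [@row];    # store a reference to this row, not its elements
        }
        close $csv or die $!;

        # $table[0] is now an array reference, so the dereference works:
        foreach my $element ( @{ $table[0] } ) {
            print $element, "\n";
        }
        print "$table[0][1]\n";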

  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" "sender_id" I want to shard the "message" table by "recipient_id". That way to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that ask for all messages sent by a certain sender. Now I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message has been sent, I need to insert the message into both "message" and "message_by_sender" tables. But what if after inserting into "message" the insertion into "message_by_sender" fail? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender" without resorting to 2 phase commit? This must be a very common issue for anyone who shards their databases. How do you deal woth it?

  • Best choice for off-site backup: dd vs tar

    - by plok
    I have two 1TB single-partition hard disks configured as RAID1, of which I would like to make an off-site backup on a third disk, which I am still to buy. The idea is to store the backup at a relative's house, considerably far away from my place, in the hope that all the information will be safe in the case of a global thermonuclear apocalypse. Of course, this backup would be well encrypted. What I still have to decide is whether I am going to simply tar the entire partition or, instead, use dd to create an image of the disks. Is there any non-trivial difference between these two approaches that I could be overlooking? This off-site backup would be updated no more than two or three times a year, in the best of the cases, so performance should not be a factor to be pondered at all. What, and why, would you use if you were me? dd, tar, or a third option?
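
    For illustration only, a minimal sketch of the two approaches combined with encryption (the /dev/md0 device name, the mount points, and symmetric gpg keying are assumptions, not recommendations):

        # File-level: tar the mounted RAID1 filesystem and pipe it through gpg
        tar -cpf - -C /mnt/raid1 . | gpg --symmetric --cipher-algo AES256 -o /media/offsite/backup.tar.gpg

        # Block-level: dd an image of the array device, compress it, then encrypt
        dd if=/dev/md0 bs=4M | gzip | gpg --symmetric --cipher-algo AES256 -o /media/offsite/backup.img.gz.gpg

    The practical difference is that tar copies only the live files (and restores onto any disk of sufficient size), while dd reproduces the whole block device, free space and partition metadata included.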

  • Dual boot a preinstalled Windows 8 laptop with Windows 7

    - by sarathi
    My hard disk uses the GPT partition style and Windows 8 is pre-installed in UEFI mode. While trying to dual boot from a bootable USB disk (created with Rufus, GPT partition style, FAT32), it displays "Windows is loading files", then hangs while displaying "Starting Windows". So I tried to install Windows 7 from inside Windows using setup.exe. Everything was running fine, but when the machine restarted itself it got stuck at "Starting Windows" again. When restarted into Windows 8, it showed an .htm file stating: "Windows cannot be installed on a computer using battery power. If the battery runs out of power during the installation, you might lose data. To continue the installation, plug in the computer's power adapter." I am sure the power adapter is connected. I have Googled a lot on this but haven't found a solution.

  • Creating a USB stick for installing CentOS 6.x using the DVD1 and DVD2 ISO files

    - by user250563
    First, we create two partitions on a USB stick of, say, 16GB: the first partition is only about 1GB and the second takes the rest of what is available. After we "w" (write) the changes, the USB stick has two partitions, so we now have sdb1 (1GB) and sdb2 (more than 14GB). Next we need to turn these partitions into filesystems. Some say I should run these commands:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it? The DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso, so we make a directory and mount the first one:

        mkdir -p /mnt/dvd1
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    I suppose we don't make a directory for DVD2 and don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB stick bootable by finding the file named mbr.bin and writing it to the device with these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words we dd it to sdb, not sdb1 or sdb2, and then use parted to set the boot flag on partition 1. So far everything looks good? Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. At this point should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, and rename it to syslinux? And inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? Then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto the USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing, or do they go into particular folders? CentOS's own web site has some instructions, but those instructions did not work for me: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working but things got ruined; I have to do it again and this time take notes.
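
    For what it's worth, here are the guesses above formalized into commands - strictly a sketch, with assumed mount points, and the exact file layout should be verified against the CentOS wiki page linked above:

        # Assumes sdb1 is mounted at /mnt/usb-boot (FAT32) and sdb2 at /mnt/usb-data (ext2/3)
        syslinux /dev/sdb1                                    # writes ldlinux.sys; mbr.bin alone is not enough
        cp /mnt/dvd1/isolinux/* /mnt/usb-boot/                # kernel, initrd and boot menu files
        mv /mnt/usb-boot/isolinux.cfg /mnt/usb-boot/syslinux.cfg
        cp -r /mnt/dvd1/images /mnt/usb-data/                 # install.img used by the installer
        cp CentOS-6.4-x86_64-bin-DVD*.iso /mnt/usb-data/      # both ISOs go in the root of the data partition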

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy". The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3 this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', content)
        conn.commit()

        # execute my FTS query here, look at the results, etc.

        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
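
    If the sqlite3 build in use does have FTS3 compiled in, one cheaper pattern (a sketch, not a definitive answer) is to load every document once and let a single MATCH query return the rowids of the documents that satisfy the expression, instead of inserting and deleting per document:

        import sqlite3

        docs = ["analyst upgraded the stock to buy", "nothing relevant here"]  # example data

        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')  # needs an FTS3-enabled SQLite
        c.executemany('INSERT INTO fts(content) VALUES (?)', ((d,) for d in docs))
        conn.commit()

        # One query over the whole corpus; rowids are assigned 1..N in insertion order,
        # so they map straight back to positions in the docs list.
        matches = c.execute(
            "SELECT rowid FROM fts WHERE content MATCH '(rating OR upgraded) NEAR/3 buy'")
        for (rowid,) in matches:
            print(docs[rowid - 1])

    Whether this helps still depends on the bundled SQLite actually having FTS3, which is the same obstacle the question runs into on Python 2.5.4.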

  • Asus AT3-IONT & Win XP

    - by user59178
    Hello! I'm trying to install Windows XP on my nettop (Asus AT3-IONT motherboard). There is a problem: when setup reaches partition selection, it sees all the partitions on the hard drive, but when I select any partition, it says that it is not compatible with Windows XP. Windows 7 installed without any problems, but XP and 2003 R2 both have this problem. How can I solve it? P.S. I already have Windows 7; I need specifically XP or 2003 for my work. Thanks.

  • Warning message in boot.ini

    - by MA1
    Hi everyone, I have a dual boot system with Windows XP Pro and Windows 7. Following are the contents of my system's boot.ini:

        ;Warning: Boot.ini is used on Windows XP and earlier operating systems.
        ;Warning: Use BCDEDIT.exe to modify Windows Vista boot options.
        ;
        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT

    I just want to know about the first two warning lines: are they always present on a dual boot system where the boot process differs between the installed operating systems, for example XP + Vista/7 or Windows 2000 + Vista/7? Regards,

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users, so obviously the application has a user management module. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that anything else is poor design. What he meant was that if a user wants to use the application, a user should be created in the user table AND as a user account in the database server as well. So if I have 50 users using my application, I should have 50 database server logins. I personally think that a single database server account for this database is enough: just grant that account the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed in the database table, as they belong to the application layer. I don't see, or agree, that there is a need to create a database server account for every user created for the application in the user table; a single database server user should be enough to handle all the queries sent by the application. I really hope to hear some suggestions/opinions on whether I'm missing something - performance or security issues? Thank you very much.

  • SQL Server full text search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello, I am using SQL Server 2008 full text search and I am having serious performance issues depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records and there is a covering index on table1 which has all the fields in the WHERE clause; I tried to simplify the statements, so forgive me if there are syntax issues).

    Scenario 1:

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select top 1 * from containstable(table1,*, 'something') as t2
                     where t2.[key]=t1.id)

    results: 10 seconds (very slow)

    Scenario 2:

        select * from table1 as t1
        join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id
        where t1.field1=90 and t1.field2='something'

    results: 10 seconds (very slow)

    Scenario 3:

        Declare @tbl Table(id uniqueidentifier primary key)
        insert into @tbl select [key] from containstable(table1,*, 'something')

        select * from table1 as t1
        where t1.field1=90
          and t1.field2='something'
          and Exists(select id from @tbl as tbl where id=req1.id)

    results: fraction of a second (super fast)

    Bottom line, it seems that if I use CONTAINSTABLE in any kind of join or WHERE-clause condition of a SELECT statement that also has other conditions, the performance is really bad. In addition, if you look at Profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable, everything goes super fast and the number of reads is much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop that causes it to read from the database many times, but of course I don't understand why. Now the questions are, first of all, why is that happening? And second, how scalable are table variables? What if the search results in tens of thousands of records - is it still going to be fast? Any ideas? Thanks

  • Is it possible to re-lock a bitlocker drive?

    - by Sean Edwards
    I'm running a partition with BitLocker on a Windows 7 Ultimate machine, which contains secure data that I have to recover infrequently. Unlocking it to access the data is obviously no problem, but is there a way to re-lock the partition when I'm done? The best I've found so far is this: http://social.technet.microsoft.com/Forums/en-US/w7itprosecurity/thread/41607938-7452-440d-8253-67fe8657bc0f Currently I have a .bat script on that drive that I can run as administrator to re-lock the drive, but it feels like a hackish solution. Does anyone have anything better? Any idea when Microsoft might release a fix for this?
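
    For reference, the usual one-liner behind such scripts (run from an elevated prompt; D: is an assumed drive letter) uses manage-bde, which ships with Windows 7:

        rem Re-lock the BitLocker-protected data volume; -ForceDismount drops open handles first.
        manage-bde -lock D: -ForceDismount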

  • phpMyAdmin shows numbers or BLOB for MySQL's utf8_bin collation columns?

    - by marc40000
    Hi! I have a table with a varchar column whose collation is set to utf8_bin. My software using this table and column works perfectly, but when I look at the content in phpMyAdmin I only see some hex values or [Blob xB]. Can I make phpMyAdmin show the content correctly? Incidentally, when I set the collation to utf8_general_ci or utf8_unicode_ci, phpMyAdmin shows the content correctly. Thx Marc [edit] Hah, I found out there is a small "+Options" link above every table in phpMyAdmin. It opens several options, including "Show BLOB contents", which turns the [Blob] into readable text when enabled, and "Show binary contents as HEX", which shows the hex codes as text when disabled. No idea why there are two options, though, or why sometimes there is a [Blob] and sometimes hex values. Now I'm still wondering: these option settings are lost when I go to another table, so I have to set them every time. Is there a way to save those options? [/edit]

  • Adding Ubuntu installation to Vista boot loader

    - by frapfap
    Hi, I had a Vista partition, then created a second partition and installed Ubuntu 9.10 on it. During the Ubuntu installation I unchecked "Install Boot Loader", so it didn't install the GRUB boot loader. I wanted to keep Vista's boot loader so I could manage it from within Vista, as I know you can - I've just forgotten where in the Control Panel you do it! Anyway, for some reason I incorrectly assumed that an Ubuntu entry would be added to the Vista boot loader. How do I enable choosing which OS to use when booting the computer? At the moment it just loads Vista automatically. Apologies if I'm technically incorrect - what I explained is what I thought was going on! Thanks.

  • Help creating image from LVM

    - by jackhab
    I need to duplicate a CentOS hard drive image for multiple stations. The HD has the following layout:

        Disk /dev/sdb: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  107MB  107MB  primary  ext3         boot
         2      107MB   250GB  250GB  primary               lvm

    I saved /dev/sdb1 to a file with fsarchiver, but for sdb2 I get:

        /fsarchiver savefs an2.fsa /dev/sdb2
        oper_save.c#1006,filesystem_mount_partition(): can't detect and mount filesystem of partition [/dev/sdb2], cannot continue.
        removed an2.fsa

    although fsarchiver probe simple correctly detects sdb2 as LVM2_member. Is fsarchiver the correct tool for this job? What's wrong? I'm on Ubuntu 9.1 with fsarchiver 0.6.8 and the LVM tools installed. Thanks.
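
    The likely explanation is that /dev/sdb2 is only an LVM physical volume, so there is no filesystem directly on it for fsarchiver to mount; the filesystems live inside the logical volumes. A sketch of archiving those instead (VolGroup00/LogVol00 is a placeholder name - take the real ones from lvdisplay):

        vgscan          # detect volume groups on the attached disk
        vgchange -ay    # activate them so the /dev/<vg>/<lv> device nodes appear
        lvdisplay       # list the logical volumes and their actual names
        fsarchiver savefs an2.fsa /dev/VolGroup00/LogVol00    # archive each LV's filesystem in turn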

  • How do I set YUI2 paginator to select a page other than the first page?

    - by Jeremy Weathers
    I have a YUI DataTable (YUI 2.8.0r4) with AJAX pagination. Each row in the table links to a details/editing page, and I want to link from that details page back to the list page that includes the record from the details page. So I have to a) offset the AJAX data correctly and b) tell YAHOO.widget.Paginator which page to select. According to my reading of the YUI API docs, I have to pass in the initialPage configuration option. I've attempted this, but it doesn't take (the data from AJAX is correctly offset, but the paginator thinks I'm on page 1, so clicking "next" takes me from e.g. page 6 to page 2). What am I not doing (or doing wrong)? Here's my DataTable building code:

        (function() {
            var columns = [
                {key: "retailer", label: "Retailer", sortable: false, width: 80},
                {key: "publisher", label: "Publisher", sortable: false, width: 300},
                {key: "description", label: "Description", sortable: false, width: 300}
            ];

            var source = new YAHOO.util.DataSource("/sales_data.json?");
            source.responseType = YAHOO.util.DataSource.TYPE_JSON;
            source.responseSchema = {
                resultsList: "records",
                fields: [
                    {key: "url"},
                    {key: "retailer"},
                    {key: "publisher"},
                    {key: "description"}
                ],
                metaFields: {
                    totalRecords: "totalRecords"
                }
            };

            var LoadingDT = function(div, cols, src, opts) {
                LoadingDT.superclass.constructor.call(this, div, cols, src, opts);
                // hide the message tbody
                this._elMsgTbody.style.display = "none";
            };

            YAHOO.extend(LoadingDT, YAHOO.widget.DataTable, {
                showTableMessage: function(msg) {
                    $('sales_table_overlay').clonePosition($('sales_table').down('table')).show();
                },
                hideTableMessage: function() {
                    $('sales_table_overlay').hide();
                }
            });

            var table = new LoadingDT("sales_table", columns, source, {
                initialRequest: "startIndex=125&results=25",
                dynamicData: true,
                paginator: new YAHOO.widget.Paginator({rowsPerPage: 25, initialPage: 6})
            });

            table.handleDataReturnPayload = function(oRequest, oResponse, oPayload) {
                oPayload.totalRecords = oResponse.meta.totalRecords;
                return oPayload;
            };
        })();

  • Regular Expression doesn't match

    - by dododedodonl
    Hi All, I've got a string with very unclean HTML. Before I parse it, I want to convert this:

        <TABLE><TR><TD width="33%" nowrap=1><font size="1" face="Arial"> NE </font> </TD>
        <TD width="33%" nowrap=1><font size="1" face="Arial"> DEK </font> </TD>
        <TD width="33%" nowrap=1><font size="1" face="Arial"> 143 </font> </TD>
        </TR></TABLE>

    into

        NE DEK 143

    so it is a bit easier to parse. I've got this regular expression (RegexKitLite):

        NSString *str = [dataString stringByReplacingOccurrencesOfRegex:@"<TABLE><TR><TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<\\/TR><\\/TABLE>"
                                                             withString:@"$1 $3 $5"];

    I'm not an expert in regex. Can someone help me out here? Regards, dodo
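
    Not a definitive answer, but one common culprit with the ICU regex engine that RegexKitLite wraps is that . does not match line breaks by default, so (.+?) fails across the newlines between the <TD> blocks. A sketch that simply turns on dot-all mode with an inline flag, assuming that is the problem here:

        // (?s) makes . match newlines as well (ICU dot-all); the rest of the pattern is unchanged.
        NSString *str = [dataString stringByReplacingOccurrencesOfRegex:@"(?s)<TABLE><TR><TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<TD width=\"33%\" nowrap=1><font size=\"1\" face=\"Arial\">(.+?)<\\/font> <\\/TD>(.+?)<\\/TR><\\/TABLE>"
                                                                 withString:@"$1 $3 $5"];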

  • There are no drives listed during Windows 7 system recovery

    - by Kragen
    I'm trying to use the Windows 7 system recovery disk to repair a boot sector; however, I'm finding that when I boot the recovery disk it doesn't list or mount any of my disk partitions, so I can't perform the recovery. The partitions are all NTFS formatted, and the drivers used to read the disks all seem to be fairly straightforward Microsoft drivers, so I shouldn't need to load any extra drivers to see my partitions (it's a Dell Latitude D530). Diskpart correctly lists the partitions (complete with labels) - it's just that when I attempt to switch to a partition it gives me the "This partition does not contain a recognised file system" message. Has anyone got any idea how I can work out why my partitions are not visible?

  • How to format an HDD with the MHDD tool

    - by PP
    I have copied the MHDD application onto a bootable CD and I want to use it to low-level format my HDD, which has some bad sectors on it. How can I mark these bad sectors using MHDD? Also, can I partition/format my HDD using MHDD? After the low-level format I want to set up Windows XP on that drive. Can someone explain, step by step, how to low-level format the drive, partition/format it, and then set up XP on it? Link to the MHDD app: http://hddguru.com/content/en/software/2005.10.02-MHDD/ Thanks, PP.

  • NHibernate update using composite key

    - by Mahesh
    Hi, I have a table definition as given below:

        License
        -------
        ClientId
        Type
        Total
        Used

    ClientId and Type together uniquely identify a row. I have a mapping file as given below:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" auto-import="true">
          <class name="Acumen.AAM.Domain.Model.License, Acumen.AAM.Domain" lazy="false" table="License">
            <id name="ClientId" access="field" column="ClientID" />
            <property name="Total" access="field" column="Total"/>
            <property name="Used" access="field" column="Used"/>
            <property name="Type" access="field" column="Type"/>
          </class>
        </hibernate-mapping>

    If a client uses a license to create a user, I need to update the Used column in the table. Because I set ClientId as the id column for this table, I am getting TooManyRowsAffectedException. Could you please let me know how to set a composite key at the mapping level, so that NHibernate can update based on ClientId and Type? Something like:

        UPDATE License SET Used=Used-1 WHERE ClientId='xxx' AND Type=1

    Please help. Thanks, Mahesh
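
    One direction to try, sketched from the mapping above rather than taken from a working project: replace the single <id> element with a <composite-id> over ClientId and Type (NHibernate then requires the License class to override Equals and GetHashCode, because the entity acts as its own key), and drop the separate Type <property>:

        <composite-id>
          <key-property name="ClientId" access="field" column="ClientID" />
          <key-property name="Type"     access="field" column="Type" />
        </composite-id>

    With both key columns mapped, the UPDATE that NHibernate issues is restricted by ClientId and Type together, which should avoid the TooManyRowsAffectedException.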

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload and watch videos. Last week I asked what is the best way to track video views so that I could display the most viewed videos this week (from videos of all dates). Now I need some help optimizing the query with which I get the videos from the database. The relevant tables are these:

        video (~239371 rows)
            VID(int), UID(int), title(varchar), status(enum), type(varchar),
            is_duplicate(enum), is_adult(enum), channel_id(tinyint)

        signup (~115440 rows)
            UID(int), username(varchar)

        videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly)
            videos_id(int), views_date(date), num_of_views(int)

    The table video holds the videos, signup holds the users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size:

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber,
               v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON v.UID = s.UID
        WHERE v.status = 'Converted'
          AND v.type = 'public'
          AND v.is_duplicate = '0'
          AND v.is_adult = '0'
          AND v.channel_id <> 10
          AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    And here is a screenshot of the EXPLAIN result. So, how can I optimize this?
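
    Not a definitive fix, but two things worth benchmarking: the condition on vvt.views_date already discards the NULL rows that a LEFT JOIN would preserve, so an INNER JOIN states the intent more directly, and a composite index on videos_views lets MySQL resolve the date filter and the per-video SUM from the index alone. The index name and column order below are assumptions to be checked against EXPLAIN:

        -- A covering index for the date filter plus the per-video aggregation (a sketch to benchmark)
        ALTER TABLE videos_views
            ADD INDEX idx_date_video_views (views_date, videos_id, num_of_views);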

  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something really simple that for some reason I have been unable to accomplish so far; I have also, surprisingly, been unable to Google the answer. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation would be as follows, if I had a table like:

        Table EXAMPLE
            ID         - INTEGER
            FIRST_NAME - VARCHAR(45)
            LAST_NAME  - VARCHAR(45)
            PHONE      - VARCHAR(45)
            CITY       - VARCHAR(45)
            STATE      - VARCHAR(45)
            ZIP        - VARCHAR(45)

    This would be the setup for the table, where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing I need is that I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to automatically increment, so I do not have to include it in the INSERT statement. Instead of

        INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'....

    I could leave out the ID column and just write something like

        INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc)

    Notice I wouldn't have to define the ID column... I know this is a very common task, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot.
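
    For reference, a minimal sketch of the usual MySQL pattern, reusing the table from the example above (the sample values are illustrative): declare the column AUTO_INCREMENT and name the remaining columns explicitly in the INSERT, so the ID can be left out entirely:

        CREATE TABLE EXAMPLE (
          ID         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- surrogate key, assigned by MySQL
          FIRST_NAME VARCHAR(45),
          LAST_NAME  VARCHAR(45),
          PHONE      VARCHAR(45),
          CITY       VARCHAR(45),
          STATE      VARCHAR(45),
          ZIP        VARCHAR(45)
        );

        -- Name only the columns being supplied; ID is filled in automatically.
        INSERT INTO EXAMPLE (FIRST_NAME, LAST_NAME, PHONE, CITY, STATE, ZIP)
        VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');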

  • Sybase fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

        -- Sybase optimizes and uses the multi-column index... fast!
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status IN ('NEW','SENT')
        ORDER BY ID

        -- Sybase does not use the index and does a very slow table scan
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status IN (SELECT status FROM allowableStatusValues)
        ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when selecting from another table. Someone please give me a clue, and thank you in advance.
