Search Results

Search found 607 results on 25 pages for 'tb selleo'.


  • VB.NET - Convert Unicode in one TB to Shift-JIS in another TB

    - by Yiu Korochko
    Trying to develop a text editor, I've got two textboxes, and a button below each one. When the button below textbox1 is pressed, it is supposed to convert the Unicode text (intended to be Japanese) to Shift-JIS. The reason I am doing this is that the software VOCALOID2 only allows ANSI- and Shift-JIS-encoded text to be pasted into the lyrics system. Users of the application normally have their keyboard set to switch to Japanese already, but it types in Unicode. How can I convert Unicode text to Shift-JIS when Shift-JIS isn't one of the static System.Text.Encoding types?
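
    For what it's worth, Shift-JIS is available through Encoding.GetEncoding even though it has no static property on System.Text.Encoding. A minimal C# sketch (my addition, not from the thread; on .NET Core/5+ you would additionally need to register CodePagesEncodingProvider from the System.Text.Encoding.CodePages package):

        using System;
        using System.Text;

        class ShiftJisDemo
        {
            static void Main()
            {
                // Shift-JIS is exposed by name (or as code page 932) rather
                // than as a static Encoding property.
                Encoding sjis = Encoding.GetEncoding("shift_jis");

                string unicodeText = "こんにちは";
                byte[] sjisBytes = sjis.GetBytes(unicodeText); // Unicode -> Shift-JIS bytes
                string roundTrip = sjis.GetString(sjisBytes);  // bytes -> Unicode string
                Console.WriteLine(roundTrip);
            }
        }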

    Read the article

  • Another Oracle TPC-H record at 3 TB, in the non-clustered category

    - by Fekete Zoltán
    TPC-H measures the performance of decision-support, data-warehouse, and business-intelligence systems (www.tpc.org). A new record has now been set on the 3 TB TPC-H benchmark in the non-clustered category, with Oracle Database 11gR2 on Sun M9000 hardware. "With the TPC-H benchmark records achieved on large systems, Oracle Database 11g continues to hold its leading position among data-warehouse systems," said Juan Loaiza, Oracle's Senior Vice President of Systems Technology. "This result demonstrates that Oracle Database 11g and the Oracle Sun SPARC Enterprise M9000 server together provide a high-performance foundation for customers' data-warehouse applications." The Oracle press release in Hungarian: Az Oracle® Database 11g új világrekordot állított fel a Sun SPARC Enterprise M9000 kiszolgálón végzett három terabájtos fürtözés nélküli TPC-H sebességpróbán. The Oracle press release in English: Oracle® Database 11g Sets New World Record TPC-H Three Terabyte Non-Clustered Benchmark Result on Sun SPARC Enterprise M9000 Server

    Read the article

  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department but perhaps a little more general.

    Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror one onto the other. Obviously, this is somewhat non-ideal.

    Our options:
    1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust ourselves, but not a whole lot.
    2) We can continue to buy small NASes and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks.
    3) We can try to implement our own data storage solution, which is why I'm asking you guys.

    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low; most of the data will just sit on the server waiting for someone to look at it once or twice, so we don't need a really high-speed system. Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us? As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question minus the low-cost: without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks

    Read the article

  • 1.5 TB USB Drive failed to mount

    - by user89348
    Seagate 1.5 TB FreeAgent USB hard drive, formatted FAT32; I figure it is 75% full. It used to work fine in Xubuntu: it shows up in Cairo Dock, but when I click on it I get "failed to mount drive". Nautilus does not display the icon, nor does Thunar. Windows Vista will no longer recognise the drive either, and BackTrack 5 R3 also fails to mount it. BUT, and here is the BIG BUT, my Pioneer DV-410 reads the files and plays everything just fine. I believe this all happened after an unclean shutdown / Xubuntu 12.10 system freeze. Why can't Xubuntu mount this drive when a crappy 13-year-old DVD player can mount it? I am desperate to back up the data in case the drive becomes completely unreadable. Using Xubuntu 12.10 Quantal, current 3.5.0-17 kernel (the past 3 kernels won't read it either), with all the newest apt-get update / dist-upgrade applied. I will post any other info you folks request as needed. Additional info as requested by githlar:

        $ sudo fsck.vfat /dev/sdb
        dosfsck 3.0.13, 30 Jun 2012, FAT32, LFN
        Read 512 bytes at 0:Input/output error

        $ lsusb
        Bus 001 Device 003: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter
        Bus 002 Device 002: ID 0bc2:3001 Seagate RSS LLC
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
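
    Given the I/O error at sector 0, one commonly suggested precaution (an addition here, not from the thread) is to image the drive read-only with GNU ddrescue before any repair attempt, so recovery tools can work on the copy. Device and file names below are placeholders:

        sudo apt-get install gddrescue
        # -d: direct disc access; -r3: retry bad sectors up to 3 times.
        sudo ddrescue -d -r3 /dev/sdb freeagent.img freeagent.map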

    Read the article

  • How to backup 20+TB of data?

    - by Jesus Fidalgo
    We have a NAS server at the company I work for that is being used for storing photography sessions. Each session is approximately 100 GB. Over the last couple of years this server has accumulated 10+ TB of data, and we are increasing the number of photo shoots exponentially. I estimate that by the end of next year we will have 20+ TB stored on this NAS. We are currently backing this server up to tape using LTO-5 tapes with Symantec Backup Exec. Since the size of this server has grown, full backups are no longer completing overnight. Does anyone have any suggestions on how to back up this amount of data? Should we be backing it up to tape? Are there any other options which may be better?

    Read the article

  • Data recovery: nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scant traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything was fine with my data on this drive, except for my movie collection. There were two great folders, one "2sort" and the other "segregated". Of all the "segregated" subfolders, only the letters C, D, and 2 or 3 others remain. And the "2sort" folder, which had umpteen subfolders amounting to more than 0.5 TB, is just gone! This is a great downfall. Now, this is a personal cloud drive, and unfortunately it has no USB port or the like to hardwire it and recover files. I'm sure there is software out there that can help me recover my beloved movies from such an interestingly "hard-to-reach" (should I say?) device. What might that software be, compadre? My happiness lies within your answer. Thank you. Remember: recovery software, or (WD) personal cloud. :) These movies were all "hand-picked" over the course of ten years; I just never catalogued my collection. If I could just get the "list" of my lost collection, that would be enough; recovering them would be a bonus, but they might turn out to be damaged if I were to somehow recover them, you know? Still, I'm certain they're all intact. I guess the file index just got corrupted. There surely is a veil of some sort that needs to be thrown or pushed aside to reveal my movies. What software does that? Thanks immensely!

    Read the article

  • Windows 7 detects my 1 TB disk as a disk with only 31 MB space

    - by ZelluX
    I've added another 1 TB Western Digital disk to my computer (there is a 250 GB disk in it already). After booting into Windows 7, it recognises the disk, but the Disk Management panel says the disk has only 31 MB total space, as does the EVEREST information. And when I rebooted the computer and entered the BIOS, it said the new disk has 0 MB capacity. Is there any way to fix this problem?

    Read the article

  • 3 TB HDD won't reactivate

    - by isif
    After doing a clean install of Windows 8, my Seagate 3 TB HDD won't reactivate in Disk Management. The two volumes are there, but I can't use them for some reason. The drive was previously used with a GPT partition table; I can see the two spanned volumes but can't reactivate either. I backed up all my files from Windows 7 onto that drive and desperately need them back. What can and should I do to get the drive back up and running? When I go to Disk → Properties → Volumes, it claims the drive has an MBR partition style, so converting to GPT somehow without data loss should work. gdisk claims to be able to do that, but when I point it at the drive, it claims that it has a GPT partition and a protective MBR. Any suggestions on what to do?

    Read the article

  • Best way to partition 1 TB (Linux and Windows 7)

    - by Simon
    Is there an intelligent way to partition 1 TB and be prepared for resizing/adding/deleting partitions? I was thinking about LVM, but as far as I remember, Windows 7 can't be installed on a logical volume, right? For now my plan is:
    - ~150 GB for Windows 7 and other stuff (Visual Studio...; maybe I'll split it 100/50 or something like that) - simple NTFS
    - 850 GB as LVM - disk for Linux (Ubuntu) and other stuff: virtual machines, etc.
    I'm mostly interested in how, and with what tools, I should set this up to get easy-to-maintain partitions for both systems.

    Read the article

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies, I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:
    - Is the cheap drive/enclosure solution the best one for this situation?
    - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
    - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
    - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive? If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12?
    - Are there any Windows 7 volume/drive/capacity limits I should be aware of?
    Thanks

    Read the article

  • 4TB HGST SATA drive only shows 1.62 TB in Windows Server 2012

    - by user136085
    I'm using a Supermicro X9SRE-3F motherboard with the latest BIOS and 2x 4 TB drives connected to the on-board SATA controller. If I set the BIOS to RAID and create a RAID 1 array, the array shows up in the BIOS as 3.6 TB. However, when I boot Windows (on a separate RAID 1 array), the 4 TB drives show up individually in Disk Manager as 2x 1.62 TB drives. I could use Windows 2012 to set up software RAID 1, but when I set the BIOS back to 2x individual drives, they still show up in Windows as 2x 1.62 TB drives. How do I access the full capacity of these drives? Thanks, Brian Bulaw
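
    For what it's worth, 1.62 TB is roughly what a 32-bit sector counter would report for a 4 TB drive, which points at an outdated storage/RAID driver rather than the drives themselves (an inference, not confirmed in the thread):

        7,814,037,168 sectors (nominal 4 TB) - 2^32 sectors = 3,519,069,872 sectors
        3,519,069,872 sectors x 512 bytes = 1,801,763,774,464 bytes ≈ 1.64 TiB

    Windows labels TiB as "TB", so a driver that truncates the sector count to 32 bits would show each drive as about 1.6 TB. If that is the cause, updating the motherboard's SATA/RAID driver is a common first thing to try.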

    Read the article

  • backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs, and HDDs, but now they want all of it in one place, attached to the network, for everybody in the office to access. 20 TB of the data is new and the rest is old; the important thing now is to store these 20 TB and gradually store the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure with either tape drives or HDDs (we haven't decided yet), or getting a NAS and connecting it to the HP server. What should we do? This is new for us ...

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • How to speed up this simple mysql query?

    - by Jim Thio
    The query is simple:

        SELECT TB.ID, TB.Latitude, TB.Longitude,
               111151.29341326 * SQRT(POW(-6.185 - TB.Latitude, 2)
                   + POW(106.773 - TB.Longitude, 2)
                     * COS(-6.185 * 0.017453292519943)
                     * COS(TB.Latitude * 0.017453292519943)) AS Distance
        FROM `tablebusiness` AS TB
        WHERE -6.2767668133836 < TB.Latitude AND TB.Latitude < -6.0932331866164
          AND FoursquarePeopleCount > 5
          AND 106.68123318662 < TB.Longitude AND TB.Longitude < 106.86476681338
        ORDER BY Distance

    See, we just look at all businesses within a rectangle. 1.6 million rows. Within that small rectangle there are only 67,565 businesses. The structure of the table is:

        1   ID                 varchar(250)  utf8_unicode_ci  NOT NULL
        2   Email              varchar(400)  utf8_unicode_ci  NULL
        3   InBuildingAddress  varchar(400)  utf8_unicode_ci  NULL
        4   Price              int(10)                        NULL
        5   Street             varchar(400)  utf8_unicode_ci  NULL
        6   Title              varchar(400)  utf8_unicode_ci  NULL
        7   Website            varchar(400)  utf8_unicode_ci  NULL
        8   Zip                varchar(400)  utf8_unicode_ci  NULL
        9   Rating Star        double                         NULL
        10  Rating Weight      double                         NULL
        11  Latitude           double                         NULL
        12  Longitude          double                         NULL
        13  Building           varchar(200)  utf8_unicode_ci  NULL
        14  City               varchar(100)  utf8_unicode_ci  NOT NULL
        15  OpeningHour        varchar(400)  utf8_unicode_ci  NULL
        16  TimeStamp          timestamp, default CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
        17  CountViews         int(11)                        NULL

    The indexes are (name, type, columns, cardinality):

        PRIMARY            BTREE (unique)  ID                       1,965,990
        City               BTREE           City                       131,066
        Building           BTREE           Building                        21
        OpeningHour        BTREE           OpeningHour (255)               21
        Email              BTREE           Email (255)                     21
        InBuildingAddress  BTREE           InBuildingAddress (255)         21
        Price              BTREE           Price                           21
        Street             BTREE           Street (255)               982,995
        Title              BTREE           Title (255)              1,965,990
        Website            BTREE           Website (255)              491,497
        Zip                BTREE           Zip (255)                  178,726
        Rating Star        BTREE           Rating Star                     21
        Rating Weight      BTREE           Rating Weight                   21
        Latitude           BTREE           Latitude                 1,965,990
        Longitude          BTREE           Longitude                1,965,990

    The query took forever. I think there has to be something wrong there. Showing rows 0-29 (67,565 total, query took 12.4767 sec).
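
    One standard fix to try (my sketch, not from the thread): MySQL can use at most one of the single-column Latitude/Longitude B-tree indexes for this query, so a composite index lets it range-scan the latitude band and filter longitude from the same index entries, leaving the distance math and sort to run over only the ~67k matching rows. The index name below is made up; the columns are the poster's:

        ALTER TABLE tablebusiness
          ADD INDEX idx_lat_lon (Latitude, Longitude, FoursquarePeopleCount);

    For larger datasets, storing the coordinates in a POINT column with a SPATIAL index is the more scalable route.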

    Read the article

  • IBM revolutionizes data transfer with the first nanophotonic chip, capable of reaching a speed of 1 Tb/s

    IBM revolutionizes data transfer with a nanophotonic chip capable of reaching a speed of 1 Tb/s. Innovation: a field in which IBM draws a lot of attention through the inventions of its research laboratories. The computing giant has just unveiled a new technology that could revolutionize the world of computing. Computer speed is held back by data-transfer bottlenecks within and between systems. IBM's creation will make it possible to carry information at the speed of light, using light pulses instead of electric current.

    Read the article

  • Does NTFS performance degrade significantly in volumes larger than five or six TB?

    - by Josh Yeager
    One of my customers is planning to set up a new document store, which will probably grow by 1-2 TB per year. One of my co-workers says that Windows performance is extremely bad if it has a single NTFS volume that is bigger than five or six TB. He thinks that we need to set up their system with multiple volumes so that no single volume will exceed that limit. Is this a real problem? Does Windows or NTFS slow down when the volume size reaches several terabytes? Or is it possible to create a single volume of 10 or more TB?

    Read the article

  • foreach loop from multiple arrays c#

    - by Mike
    This should be a simple question. All I want to know is if there is a better way of coding this. I want to do a foreach loop for every array without having to redeclare the foreach loop. Does C# provide a way to do this? I was thinking of putting this in a Collection...? Please critique my code.

        foreach (TextBox tb in vert)
        {
            if (tb.Text == box.Text)
                conflicts.Add(tb);
        }
        foreach (TextBox tb in hort)
        {
            if (tb.Text == box.Text)
                conflicts.Add(tb);
        }
        foreach (TextBox tb in cube)
        {
            if (tb.Text == box.Text)
                conflicts.Add(tb);
        }
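
    One common consolidation (a sketch, not necessarily the thread's accepted answer) chains the arrays with LINQ's Enumerable.Concat; it requires a using System.Linq; directive and assumes vert, hort, and cube are all sequences of TextBox:

        // Single loop over all three collections.
        foreach (TextBox tb in vert.Concat(hort).Concat(cube))
        {
            if (tb.Text == box.Text)
                conflicts.Add(tb);
        }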

    Read the article

  • Get the sum by comparing between two tables

    - by Ismail Gunes
    I have two tables, ProdBiscuit (as tb) and StockData (as sd). I have to get the sum of the quantity column in StockData (quantite), with the condition (sd.status > 0 AND sd.prodid = tb.id AND sd.matcuisine = 3). Here is my SQL query:

        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno
        FROM ProdBiscuit AS tb
        JOIN (SELECT id, prodid, matcuisine, status, SUM(quantite) AS rq
              FROM StockData) AS sd
          ON (tb.id = sd.prodid AND sd.status > 0 AND sd.matcuisine = 3)
        LIMIT 25 OFFSET @Myid

    This gives me no rows at all. There are only 3 rows in ProdBiscuit and 11 rows in StockData, and only 2 rows in StockData satisfy the condition, as shown in the picture. What is wrong in my query? PS: The green lines in the image show the condition in my query.
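
    A plausible fix (my sketch, assuming the column names as posted): without a GROUP BY, the subquery collapses all of StockData into a single row with an arbitrary prodid, so the join matches nothing; the filtering also belongs inside the subquery, before aggregation:

        SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno, sd.rq
        FROM ProdBiscuit AS tb
        JOIN (SELECT prodid, SUM(quantite) AS rq
              FROM StockData
              WHERE status > 0 AND matcuisine = 3
              GROUP BY prodid) AS sd
          ON tb.id = sd.prodid
        LIMIT 25 OFFSET @Myid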

    Read the article

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about, I stumbled across posts claiming that Seagate plans to ship a 3 TB drive sometime later in 2010. Unfortunately, the stuff I looked at all seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment.) Now I really don't have any "need" to better understand the underlying tedious details of this. I am just curious. And confused. So ... some questions I'm hoping someone better informed than I might answer. The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits on accessing a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up it was solved by extending the length of the LBA field from 28 to 48 bits. (Remember this website? www.48bitlba.com) A 6-byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed by Win XP SP2, if not sooner. And certainly all the hardware should be up to the task, shouldn't it? The real problem, as I understand it, with drives much bigger than 2 TB is the 4-byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT). A GPT uses 8-byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3 TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT. But every GPT comes with a so-called "safety" or "guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3 TB or larger drive? I've got more questions, such as where 4K sectors fit into all of this, but it's probably time I just shut up and posted this. ;-) -irrational john
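
    For reference, the ~2.2 TB ceiling falls straight out of the 4-byte LBA fields (a worked check, not part of the original post):

        2^32 sectors x 512 bytes/sector = 2,199,023,255,552 bytes ≈ 2.2 TB (2 TiB)

    A 32-bit sector number simply cannot address anything past that at the traditional 512-byte sector size; this is also why 4K sectors change the math: 2^32 x 4,096 bytes ≈ 17.6 TB.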

    Read the article

  • What free space thresholds/limits are advisable for 640 GB and 2 TB hard disk drives with ZEVO ZFS on OS X?

    - by Graham Perrin
    Assuming that free space advice for ZEVO will not differ from advice for other modern implementations of ZFS …

    Question: Please, what percentages or amounts of free space are advisable for hard disk drives of the following sizes?
    - 640 GB
    - 2 TB

    Thoughts: A standard answer for modern implementations of ZFS might be "no more than 96 percent full". However, if I apply that to (say) a single-disk 640 GB dataset where some of the files most commonly used (by VirtualBox) are larger than 15 GB each, then I guess that blocks for those files will become suboptimally spread across the platters with around 26 GB free. I read that in most cases, fragmentation and defragmentation should not be a concern with ZFS. Still, I like the mental picture of most fragments of a large .vdi in reasonably close proximity to each other. (Do features of ZFS make that wish for proximity too old-fashioned?) Side note: there might arise the question of how to optimise performance after a threshold is 'broken'. If it arises, I'll keep it separate.

    Background: On a 640 GB StoreJet Transcend (product ID 0x2329) in the past I probably went beyond an advisable threshold. Currently the largest file is around 17 GB, and I doubt that any .vdi or other file on this disk will grow beyond 40 GB. (Ignore the purple masses, those are bundles of 8 MB band files.) Without HFS Plus: the thresholds of twenty, ten, and five percent that I associate with the Mobile Time Machine file system need not apply. I currently use ZEVO Community Edition 1.1.1 with Mountain Lion, OS X 10.8.2, but I'd like answers to be not too version-specific.

    References, chronological order:
    - ZFS Block Allocation (Jeff Bonwick's Blog) (2006-11-04)
    - Space Maps (Jeff Bonwick's Blog) (2007-09-13)
    - Doubling Exchange Performance (Bizarre ! Vous avez dit Bizarre ?) (2010-03-11): "… So to solve this problem, what went in the 2010/Q1 software release is multifold. The most important thing is: we increased the threshold at which we switched from 'first fit' (go fast) to 'best fit' (pack tight) from 70% full to 96% full. With TB drives, each slab is at least 5 GB and 4% is still 200 MB, plenty of space and no need to do anything radical before that. This gave us the biggest bang. Second, instead of trying to reuse the same primary slabs until it failed an allocation, we decided to stop giving the primary slab this preferential treatment as soon as the biggest allocation that could be satisfied by a slab was down to 128K (metaslab_df_alloc_threshold). At that point we were ready to switch to another slab that had more free space. We also decided to reduce the SMO bonus. Before, a slab that was 50% empty was preferred over slabs that had never been used. In order to foster more write aggregation, we reduced the threshold to 33% empty. This means that a random write workload now spreads to more slabs, where each one will have a larger amount of free space, leading to more write aggregation. Finally, we also saw that slab loading was contributing to lower performance and implemented a slab prefetch mechanism to reduce the downtime associated with that operation. The conjunction of all these changes led to 50% improved OLTP and 70% reduced variability from run to run …"
    - OLTP Improvements in Sun Storage 7000 2010.Q1 (Performance Profiles) (2010-03-11)
    - "ZFS runs really slowly when free disk usage goes above 80%" (Alasdair on Everything) (2010-07-18), where commentary includes: "… OpenSolaris has changed this in onnv revision 11146 …"
    - [CFT] Improved ZFS metaslab code (faster write speed) (2010-08-22)

    Read the article

  • Not Playing Nice Together

    - by David Douglass
    One of the things I've noticed is that two industry trends are not playing nice together, those trends being multi-core CPUs and massive hard drives. It's not a problem if you keep your cores busy with compute-intensive work, but for software developers the beauty of multi-core CPUs (along with gobs of RAM and a 64-bit OS) is virtualization. But when you have only one hard drive (who needs another when it holds 2 TB of data?) you wind up with a serious hard drive bottleneck. A solid state drive would definitely help, and might even be a complete solution, but the cost is ridiculous. Two TB of solid state storage will set you back around $7,000! A spinning 2 TB drive is only $150. I see a couple of solutions for this. One is the mainframe concept of near and far storage: put the stuff that will be heavily accessed on a solid state drive and the rest on a spinning drive. Another solution is multiple spinning drives. Instead of a single 2 TB drive, get four 500 GB drives. In total, the four 500 GB drives will cost about $100 more than the single 2 TB drive. You'll need to be smart about which drive you place things on so that the load is spread evenly. Another option, for better performance, would be four 10,000 RPM 300 GB drives, but that would cost about $800 more than the single 2 TB drive and would deliver only 1.2 TB of space. All pricing based on Microcenter as of March 14, 2010.

    Read the article

  • MVVM - How can I bind to a property which is not a DependencyProperty?

    - by highone
    I have found this question: http://stackoverflow.com/questions/2245928/mvvm-and-the-textboxs-selectedtext-property. However, I am having trouble getting the solution given to work. This is my non-working code:

    View (SelectedText and Text are just string properties from my ViewModel):

        <TextBox Text="{Binding Path=Text, UpdateSourceTrigger=PropertyChanged}"
                 Height="155" HorizontalAlignment="Left" Margin="68,31,0,0"
                 Name="textBox1" VerticalAlignment="Top" Width="264"
                 AcceptsReturn="True" AcceptsTab="True"
                 local:TextBoxHelper.SelectedText="{Binding SelectedText, UpdateSourceTrigger=PropertyChanged}" />
        <TextBox Text="{Binding SelectedText, Mode=OneWay, UpdateSourceTrigger=PropertyChanged}"
                 Height="154" HorizontalAlignment="Left" Margin="82,287,0,0"
                 Name="textBox2" VerticalAlignment="Top" Width="239" />

    TextBoxHelper:

        public static class TextBoxHelper
        {
            #region "Selected Text"

            public static string GetSelectedText(DependencyObject obj)
            {
                return (string)obj.GetValue(SelectedTextProperty);
            }

            public static void SetSelectedText(DependencyObject obj, string value)
            {
                obj.SetValue(SelectedTextProperty, value);
            }

            // Using a DependencyProperty as the backing store for SelectedText.
            // This enables animation, styling, binding, etc...
            public static readonly DependencyProperty SelectedTextProperty =
                DependencyProperty.RegisterAttached(
                    "SelectedText",
                    typeof(string),
                    typeof(TextBoxHelper),
                    new FrameworkPropertyMetadata(null, FrameworkPropertyMetadataOptions.BindsTwoWayByDefault, SelectedTextChanged));

            private static void SelectedTextChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
            {
                TextBox tb = obj as TextBox;
                if (tb != null)
                {
                    if (e.OldValue == null && e.NewValue != null)
                    {
                        tb.SelectionChanged += tb_SelectionChanged;
                    }
                    else if (e.OldValue != null && e.NewValue == null)
                    {
                        tb.SelectionChanged -= tb_SelectionChanged;
                    }

                    string newValue = e.NewValue as string;
                    if (newValue != null && newValue != tb.SelectedText)
                    {
                        tb.SelectedText = newValue;
                    }
                }
            }

            static void tb_SelectionChanged(object sender, RoutedEventArgs e)
            {
                TextBox tb = sender as TextBox;
                if (tb != null)
                {
                    SetSelectedText(tb, tb.SelectedText);
                }
            }

            #endregion
        }

    What am I doing wrong?
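
    One thing worth checking, offered as a guess rather than the thread's confirmed answer: if the view model's SelectedText starts out null, the attached property never changes away from its null default, so SelectedTextChanged never fires and tb_SelectionChanged is never subscribed; after that, no selection can ever push a value back to the view model. Giving the property a non-null initial value forces the change callback to run once and attach the handler (INotifyPropertyChanged plumbing assumed):

        private string _selectedText = string.Empty; // non-null so the DP callback fires once

        public string SelectedText
        {
            get { return _selectedText; }
            set { _selectedText = value; OnPropertyChanged("SelectedText"); }
        }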

    Read the article

  • Getting values from Dynamic elements.

    - by nCdy
    I'm adding some dynamic elements to my WebApp this way (the language used is Nemerle; it has a simple C#-like syntax):

        unless (GridView1.Rows.Count == 0)
        {
            foreach (index with row = GridView1.Rows[index] in [0 .. GridView1.Rows.Count - 1])
            {
                row.Cells[0].Controls.Add({
                    def TB = TextBox();
                    TB.EnableViewState = false;
                    unless (row.Cells[0].Text == "&nbsp;")
                    {
                        TB.Text = row.Cells[0].Text;
                        row.Cells[0].Text = "";
                    }
                    TB.ID = TB.ClientID;
                    TB.Width = 60;
                    TB
                });
                row.Cells[0].Controls.Add({
                    def B = Button();
                    B.EnableViewState = false;
                    B.Width = 80;
                    B.Text = "?????????";
                    B.UseSubmitBehavior = false; // Makes no sense
                    //B.OnClientClick = "select(5);"; // HERE I CAN KNOW ABOUT TB.ID
                    //B.Click += EventHandler(fun(_,_) : void { }); // POSTBACK KILLS ALL OF THAT
                    B
                });
            }
        }

    These textboxes must make the first field of the GridView editable. But now I need to save the values. I can't do it on the server side, because any postback will destroy all the dynamic elements, so I must do it without a postback. So I try ...

        <script type="text/javascript" src="Scripts/jquery-1.4.1.min.js"></script>
        <script type="text/javascript">
            function CallPageMethod(methodName, onSuccess, onFail) {
                var args = '';
                var l = arguments.length;
                if (l > 3) {
                    for (var i = 3; i < l - 1; i += 2) {
                        if (args.length != 0) args += ',';
                        args += '"' + arguments[i] + '":"' + arguments[i + 1] + '"';
                    }
                }
                var loc = window.location.href;
                loc = (loc.substr(loc.length - 1, 1) == "/") ? loc + "Report.aspx" : loc;
                $.ajax({
                    type: "POST",
                    url: loc + "/" + methodName,
                    data: "{" + args + "}",
                    contentType: "application/json; charset=utf-8",
                    dataType: "json",
                    success: onSuccess,
                    fail: onFail
                });
            }
            function select(index) {
                var id = $("#id" + index).html();
                CallPageMethod("SelectBook", success, fail, "id", id);
            }
            function success(response) {
                alert(response.d);
            }
            function fail(response) {
                alert("Ошибка."); // "Error."
            }
        </script>

    So... here is the trouble line: var id = $("#id" + index).html(); I know what the ID is here (TB.ID = TB.ClientID when I add it), but I have no idea how to send it to the web form. If I could add something like this div, that would be really great, but I can't add this element from code-behind as a dynamic element:

        <div id="Result" onclick="select('<%= TB.ID %>');"> Click here. </div>

    So how can I transfer TB.ID or TB.ClientID to some static div? Or how can I add some clickable dynamic element without a postback, so as not to destroy all my dynamic elements? Thank you.
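
    One low-tech way out, sketched as a guess (not from the thread): the button already knows its paired textbox when both are created, so the real ClientID can be baked into OnClientClick right there, and no static div is needed. In the same C#-like style as the snippet above:

        // Pass the paired textbox's ClientID straight to the client-side
        // select() function; "return false" suppresses the postback.
        B.OnClientClick = "select('" + TB.ClientID + "'); return false;";

    select(id) would then use the id directly, e.g. $("#" + id).val(), instead of looking up $("#id" + index).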

    Read the article

  • transfer Thunderbird (17) Profile on Win7 to Ubuntu 12.04

    - by William Curran
    I want to transfer a Thunderbird profile on Win7 to Thunderbird (17) on Ubuntu 12.04. I already copied the profile folder from Windows to Ubuntu and modified profiles.ini on the Ubuntu machine to include:

        [Profile]
        Name=Bill
        IsRelative=1
        Path=(the name of the transferred profile folder)

    I think the problem is that the Windows TB profile content (files and folder structure) looks VERY different from that of the Ubuntu TB profile that was created on installation. The Ubuntu install is new, whereas the Windows TB has undergone many updates; it seems the system for profile storage has changed drastically. I tried to start TB in safe mode but couldn't get the path correct to start TB in the terminal with the -safe-mode switch. What can I do? Bill
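
    For reference, a typical profiles.ini looks like the sketch below (note the section header is [Profile0], not [Profile]; the folder name here is a placeholder). If the [Profile] header in the snippet above is verbatim, that alone could keep Thunderbird from seeing the profile:

        [General]
        StartWithLastProfile=1

        [Profile0]
        Name=Bill
        IsRelative=1
        Path=xxxxxxxx.Bill
        Default=1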

    Read the article
