Search Results

Search found 59643 results on 2386 pages for 'data migration'.

  • How to use connectify to share files between 2 laptops, one running Windows 7?

    - by p2pnode
    I have two laptops, one running XP and the other Windows 7; both have WiFi cards installed. I want to share files between these two machines, and in a next step also share an internet connection. I have heard that Connectify on Windows 7 is a very handy, easy tool for setting up a hotspot, and that other machines (even XP or Vista) can connect to this host hotspot to share files and the internet connection. I am able to see the network (set up on the host) from my client machine, but how do I share files? I don't see any menu for that. Also, after installing Connectify on Windows 7, I am no longer able to connect to the internet using my data card; it throws the error "Error 31: A device attached to the system is not functioning". And on the client machine, if I connect to the data card as well as to the wireless network set up by the host machine, I can no longer access the internet even though the data card is connected; the browser just shows an error. Please help. Is there any other utility similar to Connectify?
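
    For reference, Windows 7 can create a hosted network without any third-party tool; a minimal sketch using the built-in netsh commands (the SSID and key below are placeholders):

        :: run from an elevated command prompt
        netsh wlan set hostednetwork mode=allow ssid=MyShare key=Passw0rd123
        netsh wlan start hostednetwork
        netsh wlan show hostednetwork

    The hotspot only provides the network link; file sharing itself still happens over ordinary Windows shares (\\hostname\sharename), and internet sharing is done by enabling ICS on the data-card connection and pointing it at the hosted-network adapter.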

    Read the article

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers, and I am running into the same issue with all remote machines. When creating a data collector set, I specify a domain account that is in the Administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users" to be safe). On the "Available Counters" screen, when I type in a remote computer name, perfmon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set; however, once saved, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel (screenshots omitted here). I can run data collector sets against my local machine with no problem, and I am opening perfmon with my local account in both scenarios. I also have the Remote Registry service started on each remote machine. What is going on?
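
    As a cross-check, the command-line counterpart of perfmon can create and start a collector set against a remote machine; a sketch with placeholder names (the -s switch targets the remote computer, and the counter path and interval are illustrative):

        logman create counter RemoteCPU -s SERVER01 -c "\Processor(_Total)\% Processor Time" -si 00:00:15
        logman start RemoteCPU -s SERVER01
        logman query RemoteCPU -s SERVER01

    If logman works where the GUI does not, the problem is likely in the MMC snap-in rather than in the remote counter infrastructure or the account permissions.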

    Read the article

  • Casting interfaces in java

    - by owca
    I had two separate interfaces, one MultiLingual for choosing the language of the text to return, and a second Justification to justify the returned text. Now I need to join them, but I'm stuck on a java.lang.ClassCastException. Class Book is not important here; Data.length and FormattedInt.width correspond to the width of the line. The code:

        public interface MultiLingual {
            static int ENG = 0;
            static int PL = 1;
            String get(int lang);
            int setLength(int length);
        }

        class Book implements MultiLingual {
            private String title;
            private String publisher;
            private String author;
            private int jezyk;

            public Book(String t, String a, String p, int y) {
                this(t, a, p, y, 1);
            }

            public Book(String t, String a, String p, int y, int lang) {
                title = t;
                author = a;
                publisher = p;
                jezyk = lang;
            }

            public String get(int lang) {
                jezyk = lang;
                return this.toString();
            }

            public int setLength(int i) {
                return 0;
            }

            @Override
            public String toString() {
                String dane;
                if (jezyk == ENG) {
                    dane = "Author: " + this.author + "\n"
                         + "Title: " + this.title + "\n"
                         + "Publisher: " + this.publisher + "\n";
                } else {
                    dane = "Autor: " + this.author + "\n"
                         + "Tytul: " + this.title + "\n"
                         + "Wydawca: " + this.publisher + "\n";
                }
                return dane;
            }
        }

        class Data implements MultiLingual {
            private int day;
            private int month;
            private int year;
            private int jezyk;
            private int length;

            public Data(int d, int m, int y) {
                this(d, m, y, 1);
            }

            public Data(int d, int m, int y, int lang) {
                day = d;
                month = m;
                year = y;
                jezyk = lang;
            }

            public String get(int lang) {
                jezyk = lang;
                return this.toString();
            }

            public int setLength(int i) {
                length = i;
                return length;
            }

            @Override
            public String toString() {
                String dane = "";
                String miesiac = "";
                String dzien = "";
                switch (month) {
                    case 1: miesiac = "January";
                    case 2: miesiac = "February";
                    case 3: miesiac = "March";
                    case 4: miesiac = "April";
                    case 5: miesiac = "May";
                    case 6: miesiac = "June";
                    case 7: miesiac = "July";
                    case 8: miesiac = "August";
                    case 9: miesiac = "September";
                    case 10: miesiac = "October";
                    case 11: miesiac = "November";
                    case 12: miesiac = "December";
                }
                if (day == 1) {
                    dzien = "st";
                }
                if (day == 2 || day == 22) {
                    dzien = "nd";
                }
                if (day == 3 || day == 23) {
                    dzien = "rd";
                } else {
                    dzien = "th";
                }
                if (jezyk == ENG) {
                    dane = this.day + dzien + " of " + miesiac + " " + this.year;
                } else {
                    dane = this.day + "." + this.month + "." + this.year;
                }
                return dane;
            }
        }

        interface Justification {
            static int RIGHT = 1;
            static int LEFT = 2;
            static int CENTER = 3;
            String justify(int just);
        }

        class FormattedInt implements Justification {
            private int liczba;
            private int width;
            private int wyrownanie;

            public FormattedInt(int s, int i) {
                liczba = s;
                width = i;
                wyrownanie = 2;
            }

            public String justify(int just) {
                wyrownanie = just;
                String wynik = "";
                String tekst = Integer.toString(liczba);
                int len = tekst.length();
                int space_left = width - len;
                int space_right = space_left;
                int space_center_left = (width - len) / 2;
                int space_center_right = width - len - space_center_left - 1;
                String puste = "";
                if (wyrownanie == LEFT) {
                    for (int i = 0; i < space_right; i++) {
                        puste = puste + " ";
                    }
                    wynik = tekst + puste;
                } else if (wyrownanie == RIGHT) {
                    for (int i = 0; i < space_left; i++) {
                        puste = puste + " ";
                    }
                    wynik = puste + tekst;
                } else if (wyrownanie == CENTER) {
                    for (int i = 0; i < space_center_left; i++) {
                        puste = puste + " ";
                    }
                    wynik = puste + tekst;
                    puste = " ";
                    for (int i = 0; i < space_center_right; i++) {
                        puste = puste + " ";
                    }
                    wynik = wynik + puste;
                }
                return wynik;
            }
        }

    And the test code containing the cast, (Justification) gatecrasher[1], which gives me the errors:

        MultiLingual gatecrasher[] = {
            new Data(3, 12, 1998),
            new Data(10, 6, 1924, MultiLingual.ENG),
            new Book("Sekret", "Rhonda Byrne", "Nowa proza", 2007),
            new Book("Tuesdays with Morrie", "Mitch Albom", "Time Warner Books", 2003, MultiLingual.ENG),
        };
        gatecrasher[0].setLength(25);
        gatecrasher[1].setLength(25);
        Justification[] t = {
            new FormattedInt(345, 25),
            (Justification) gatecrasher[1],
            (Justification) gatecrasher[0],
            new FormattedInt(-7, 25)
        };
        System.out.println("         10        20        30");
        System.out.println("123456789 123456789 123456789");
        for (int i = 0; i < t.length; i++)
            System.out.println(t[i].justify(Justification.RIGHT) + "<----\n");

    Read the article

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts, and I have about 90-100GB of deduplicated data on the array. I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure; however, the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

                NAME                        STATE     READ WRITE CKSUM
                vol1                        ONLINE       0     0     0
                  mirror-0                  ONLINE       0     0     0
                    c8t5000C50031B94409d0   ONLINE       0     0     0
                    c9t5000C50031BBFE25d0   ONLINE       0     0     0
                  mirror-1                  ONLINE       0     0     0
                    c10t5000C50031D158FDd0  ONLINE       0     0     0
                    c11t5000C5002C823045d0  ONLINE       0     0     0
                  mirror-2                  ONLINE       0     0     0
                    c12t5000C50031D91AD1d0  ONLINE       0     0     0
                    c2t5000C50031D911B9d0   ONLINE       0     0     0
                  mirror-3                  ONLINE       0     0     0
                    c13t5000C50031BC293Dd0  ONLINE       0     0     0
                    c14t5000C50031BD208Dd0  ONLINE       0     0     0
                  mirror-4                  ONLINE       0     0     0
                    c15t5000C50031BBF6F5d0  ONLINE       0     0     0
                    c16t5000C50031D8CFADd0  ONLINE       0     0     0
                  mirror-5                  ONLINE       0     0     0
                    c17t5000C50031BC0E01d0  ONLINE       0     0     0
                    c18t5000C5002C7CCE41d0  ONLINE       0     0     0
                logs
                  c19t0d0                   ONLINE       0     0     0
                cache
                  c6t5001517959467B45d0     FAULTED      2   542     0  too many errors
                spares
                  c7t5000C50031CB43D9d0     AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well). Edit - What's the current best-choice SSD for L2ARC cache applications these days?
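
    For reference, a faulted cache device can normally be detached from a live pool without downtime, using the device name from the status output above; a sketch:

        zpool remove vol1 c6t5001517959467B45d0
        zpool status vol1

    Unlike log devices, L2ARC devices hold no unique data (they only cache what is already on the pool), which is why mirroring them is rarely recommended; a clean failure should only cost read performance.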

    Read the article

  • How do I merge a transient entity with a session using Castle ActiveRecordMediator?

    - by Daniel T.
    I have a Store and a Product entity:

        public class Store {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public ISet<Product> Products { get; set; }
        }

        public class Product {
            public Guid Id { get; set; }
            public int Version { get; set; }
            public Store ParentStore { get; set; }
            public string Name { get; set; }
        }

    In other words, I have a Store that can contain multiple Products in a bidirectional one-to-many relationship. I'm sending data back and forth between a web browser and a web service. The following steps emulate the communication between the two, using session-per-request. I first save a new instance of a Store:

        using (new SessionScope()) {
            // this is data from the browser
            var store = new Store { Id = Guid.Empty };
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Then I grab the store out of the DB, add a new product to it, and save it:

        using (new SessionScope()) {
            // this is data from the browser
            var product = new Product { Id = Guid.Empty, Name = "Apples" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);
            ActiveRecordMediator.SaveAndFlush(store);
        }

    Up to this point, everything works well. Now I want to update the Product I just saved:

        using (new SessionScope()) {
            // data from browser
            var product = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };
            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            store.Products.Add(product);
            // throws exception on this call
            ActiveRecordMediator.SaveAndFlush(store);
        }

    When I try to update the product, I get the following exception:

        a different object with the same identifier value was already associated with the session: 00000000-0000-0000-0000-000000000000, of entity: Product

    As I understand it, the problem is that when I get the Store out of the database, it also fetches the Product associated with it, so both entities are persistent. I then try to save a transient Product (the one with the updated info from the browser) that has the same ID as the one already associated with the session, and the exception is thrown. However, I don't know how to get around this. If I could get hold of an NHibernate ISession, I could call ISession.Merge() on it, but it doesn't look like ActiveRecordMediator has anything similar (SaveCopy threw the same exception). Does anyone know the solution? Any help would be greatly appreciated!
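
    For what it's worth, Castle ActiveRecord does expose the underlying NHibernate session through ActiveRecordMediator.Execute, whose callback receives the live ISession; the exact overloads vary by version, so treat this as a hedged sketch rather than a confirmed API:

        // sketch: the NHibernateDelegate callback receives the ISession plus the
        // instance passed in, so Merge can be invoked on the session directly
        var merged = (Product) ActiveRecordMediator.Execute(
            typeof(Product),
            (session, instance) => session.Merge(instance),
            product);

    Merge copies the transient object's state onto the persistent instance already tracked by the session and returns the persistent one, which is exactly the situation the exception message describes.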

    Read the article

  • DRBD on a disk with existing file system that takes all the place

    - by Karolis T.
    I'm currently trying to simulate the environment via Xen. I have installed two Debian systems with this filesystem layout:

        cltest1:/etc# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda2            6.0G  417M  5.2G   8% /
        tmpfs                 257M     0  257M   0% /lib/init/rw
        udev                   10M   16K   10M   1% /dev
        tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

        global {
            minor-count 1;
        }
        resource mysql {
            protocol C;
            syncer {
                rate 10M; # 10 Megabytes
            }
            on cltest1 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.186:7789;
                meta-disk internal;
            }
            on cltest2 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.187:7789;
                meta-disk internal;
            }
        }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

        Starting DRBD resources: [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
        [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

        cltest1:/etc# drbdadm create-md mysql
        md_offset 6442446848
        al_offset 6442414080
        bm_offset 6442217472

        Found ext3 filesystem which uses 6291456 kB
        current configuration leaves usable 6291228 kB

        Device size would be truncated, which would corrupt data and result in
        'access beyond end of device' errors.
        You need to either
         * use external meta data (recommended)
         * shrink that filesystem first
         * zero out the device (destroy the filesystem)
        Operation refused.

        Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
        drbdadm aborting

    As I understand it, all of my problems come from having no unallocated disk space on xvda2. What are my options besides shrinking the FS or connecting a separate physical disk? Can't the meta-data be stored in a file on the local filesystem?
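
    DRBD's meta-data has to live on a block device, not a plain file, so when the backing partition is completely occupied by a filesystem, the usual workaround is a small dedicated partition for external meta-data; a sketch, assuming a spare /dev/xvdb1 exists on each host (device names are illustrative):

        on cltest1 {
            device    /dev/drbd0;
            disk      /dev/xvda2;
            address   192.168.1.186:7789;
            # external meta-data on a small dedicated partition (hypothetical device)
            meta-disk /dev/xvdb1[0];
        }

    Internal meta-data needs free space at the end of the lower device (on the order of 32MB per terabyte of data), which an ext3 filesystem spanning the whole partition doesn't leave; that is exactly what the create-md refusal above is reporting.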

    Read the article

  • error handling with NSURLConnection sendSynchronousRequest

    - by Nnp
    How can I do better error handling with NSURLConnection sendSynchronousRequest? Is there any way I can implement - (void)connection:(NSURLConnection *)aConn didFailWithError:(NSError *)error? I have an NSOperationQueue which is getting data in the background; that's why I use a synchronous request. And if I have to implement an asynchronous request, how can I wait for the request to complete? That method can't proceed without the data.
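
    For the synchronous case, the delegate method isn't available, but the same NSError comes back through the error out-parameter; a sketch (the request variable is assumed to already exist):

        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        if (data == nil) {
            // the same NSError that didFailWithError: would have delivered
            NSLog(@"Request failed: %@", [error localizedDescription]);
        }

    Since this already runs on an NSOperationQueue worker, blocking synchronously there is a common pattern; the delegate-based API is mainly needed when the request runs on the main thread.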

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers'... one running FreeNAS off a USB stick, using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyways. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is... most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
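
    On the software-RAID half of the question: the usual Linux approach is to ignore the enclosure's own RAID modes (typically firmware "fake RAID"), run it as JBOD, and let mdadm do the work - with the caveat that seeing all four drives over one eSATA cable requires the eSATA card to support port multipliers. A sketch, assuming the drives show up as sdb through sde, each with a single partition:

        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backup backupvg
        mkfs.ext4 /dev/backupvg/backup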

    Read the article

  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my FTP server it takes a minute or so to connect, and then every command I issue times out with an "operation failed" error. Aside from these issues, I'm not completely confident about the file ownership & permissions or the configuration / settings, so I think it's best if I just re-install and re-configure correctly. I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs to have a user created with the same group as the nginx user (www-data) and the same home directory as the nginx server (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chrooting of local users. In my previous config, I had /bin/false set as the ftp user's shell and pam_shells.so disabled. I also had local_umask set to 0027. So, starting with a fresh EC2 instance, I've got:

        sudo apt-get install vsftpd
        sudo apt-get install nginx

    For the firewall I issued the command (not sure if necessary):

        sudo ufw allow ftp

    Which commands / config are recommended from here? I only need one FTP user that I can use to log in with my FTP client to modify the single nginx web domain, which will need PHP & SQL for WordPress.
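
    On the timeouts specifically: connecting but then having every command hang is the classic signature of passive-mode data connections being blocked, which on EC2 usually means pinning vsftpd's passive port range and opening that range in the instance's security group. A sketch of the relevant vsftpd.conf lines (the port range and address are placeholders):

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        local_umask=027
        pasv_enable=YES
        pasv_min_port=10090
        pasv_max_port=10100
        # the instance's public IP (placeholder)
        pasv_address=203.0.113.10

    The security group then needs TCP 21 plus the 10090-10100 range opened, alongside the ufw rule.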

    Read the article

  • Expected output from an RM-1501 RS232 interface?

    - by Jon Cage
    I have an old RM-1501 digital tachometer which I'm using to try to identify the speed of an object. According to the manual, I should be able to read the data over a serial link. Unfortunately, I don't appear to be able to get any sensible output from the device (it never gives a valid speed). I think it might be a signalling problem, because disconnecting the CTS line starts to get some data through. Has anyone ever developed anything for one of these, or had any success?

    Read the article

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has 1 disk split into 6 partitions, all NTFS:

        C: 6 GB (boot)
        D: 15 GB
        E: 6 GB
        F: 6 GB
        G: 5 GB
        H: 26 GB

    Most of the partitions are mostly full (>60%). What is the most straightforward way to do a P2V migration of the server? I can do minor database & data syncs after the P2V is successful and running as a VM within XenServer; it's just getting to that point which is not clear. The option of installing Windows 2000 Server from scratch is not available - I need to convert the existing physical server as-is into a VM hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting only 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.
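
    Where the dedicated tools fall short, one generic fallback is a raw block-level copy of the whole disk, which sidesteps any per-partition limit; a hedged sketch, assuming the source is booted from a live CD (so the disk is quiescent), the disk is /dev/sda, and an empty virtual disk of at least the same size is attached as /dev/xvdb to a helper Linux VM on the XenServer side (all names illustrative):

        dd if=/dev/sda bs=64k | gzip -c | ssh root@helpervm "gunzip -c | dd of=/dev/xvdb bs=64k"

    All six partitions travel inside the single disk image. Windows 2000 under Xen runs as an HVM guest with emulated IDE hardware, so expect the usual P2V boot-driver adjustment (IDE versus the original HBA) on first boot.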

    Read the article

  • error: no matching function for call to ‘BSTreeNode<int, int>::BSTreeNode(int, int, NULL, NULL)’ - what's wrong?

    - by Alexander Suraphel
    error: no matching function for call to ‘BSTreeNode<int, int>::BSTreeNode(int, int, NULL, NULL)’
    candidates are: BSTreeNode::BSTreeNode(KF, DT&, BSTreeNode*, BSTreeNode*) [with KF = int, DT = int]

    Here is how I used it:

        BSTreeNode<int, int> newNode(5, 9, NULL, NULL);

    I defined it as follows:

        BSTreeNode(KF sKey, DT &data, BSTreeNode *lt, BSTreeNode *rt)
            : key(sKey), dataItem(data), left(lt), right(rt) {}

    What's wrong with using my constructor this way? I've been pulling out my hair all night - please help me ASAP!!
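
    The mismatch is in the second parameter: DT& is a non-const reference, and the literal 9 is an rvalue, which cannot bind to a non-const reference. A sketch of the two usual fixes:

        // fix 1: pass an lvalue, which can bind to DT&
        int value = 9;
        BSTreeNode<int, int> newNode(5, value, NULL, NULL);

        // fix 2: take the parameter by const reference, which binds to rvalues
        // too (assuming the node stores a copy rather than a reference):
        // BSTreeNode(KF sKey, const DT &data, BSTreeNode *lt, BSTreeNode *rt)
        //     : key(sKey), dataItem(data), left(lt), right(rt) {}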

    Read the article

  • [Silverlight] DataGrid

    - by nCdy
    I'm working through tutorials: Silverlight + MSSQL. I'm on the last step, where it says "copy-paste my code and tada, it will work"... :-/ But after I added the System.Windows.Controls.Data reference, it still can't find it: Error 3: The type or namespace name 'Data' does not exist in the namespace 'System.Windows.Controls' (are you missing an assembly reference?) I really am not missing the reference... Maybe I need to add it some other way, or... I really have no idea. (VSWDEE2010)
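
    One detail worth checking: System.Windows.Controls.Data is the assembly name, not a namespace - the DataGrid type itself lives in the System.Windows.Controls namespace. A C# line like using System.Windows.Controls.Data; would produce exactly this error even with the reference added. In XAML the usual mapping looks like this sketch (the prefix name is arbitrary):

        xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
        ...
        <data:DataGrid x:Name="grid" AutoGenerateColumns="True" />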

    Read the article

  • Django forms "not" using forms from models

    - by zubinmehta
    I have a form generated from various models, and the values filled in go and sit in some other table. Hence, in this case I haven't used Django's built-in model forms (i.e. I am not creating forms from models). Now, the data posted from the hand-made form is handled by view1, which should clean the data accordingly. How do I go about it and use the various clean functions and define validation errors (and preferably not put the validation logic in the view itself)?
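
    A plain forms.Form (no model involved) still provides the full cleaning machinery, so the validation can live on the form rather than in view1; a minimal sketch with illustrative field names:

        from django import forms

        class OrderForm(forms.Form):
            email = forms.EmailField()
            quantity = forms.IntegerField()

            def clean_quantity(self):
                qty = self.cleaned_data["quantity"]
                if qty < 1:
                    raise forms.ValidationError("Quantity must be at least 1.")
                return qty

    In the view, form = OrderForm(request.POST) followed by form.is_valid() runs every clean_* method and collects the errors, and form.cleaned_data can then be written to the other table by whatever persistence code you already have.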

    Read the article

  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication? My Zimbra server version is:

        [zimbra@zimbra ~]$ zmcontrol -v
        Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

        [zimbra@ldap ~]$ zmcontrol status
        Host ldap.domain.com
                ldap                    Running
                snmp                    Running
                stats                   Running
                zmconfigd               Running

    I already installed the nss-pam-ldapd packages on my servers:

        [root@www]# rpm -qa | grep ldap
        nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
        apr-util-ldap-1.3.9-3.el6_0.1.x86_64
        pam_ldap-185-11.el6.x86_64
        openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

        [root@www]# tail -n 7 /etc/nslcd.conf
        uid nslcd
        gid ldap
        # This comment prevents repeated auto-migration of settings.
        uri ldap://ldap.domain.com
        base dc=domain,dc=com
        binddn uid=zimbra,cn=admins,cn=zimbra
        bindpw **pass**
        ssl no
        tls_cacertdir /etc/openldap/cacerts

    When I run:

        [root@www ~]# id username
        id: username: No such user

    But I am sure that the user username exists on the LDAP server. EDIT: When I run ldapsearch I get all the results, with credentials and dn:

        [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
        # extended LDIF
        #
        # LDAPv3
        # base <dc=domain,dc=com> (default) with scope subtree
        # filter: objectclass=*
        # requesting: ALL
        #

        # domain.com
        dn: dc=domain,dc=com
        zimbraDomainType: local
        zimbraDomainStatus: active
        . . .
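
    Since ldapsearch succeeds but id fails, the likely gap is the name-service side on the client: nslcd must be running, and /etc/nsswitch.conf must actually route account lookups through LDAP. A sketch of the standard nss-pam-ldapd wiring (not Zimbra-specific):

        # /etc/nsswitch.conf
        passwd: files ldap
        shadow: files ldap
        group:  files ldap

        # make sure the daemon is up and persists across reboots
        service nslcd start
        chkconfig nslcd on

    One more caveat, offered tentatively: stock Zimbra accounts usually lack posixAccount attributes (uidNumber, gidNumber, homeDirectory), which nslcd needs before id can resolve a user, so the directory may need a POSIX schema extension for the accounts as well.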

    Read the article

  • How does QuickBooks handle IIF imports?

    - by dwwilson66
    I've received a 'template' for an IIF file for QuickBooks transactions, and there are like seventy-bazillion fields in there, lots of which I never even use. It's a tab-delimited file with the following lines: field headers for transactions and the respective splits for those transactions, followed by an end-of-transaction marker.

        !TRNS    FIELD1       FIELD2       FIELD3       ...  FIELD48
        !SPL     FIELD1       FIELD2       FIELD3       ...  FIELD48
        !ENDTRNS
        TRNS     FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        SPL      FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        ENDTRNS
        ...

    What drives data to a particular field? Is it the field header with corresponding data, or is it the tabular position relative to the head of the line? E.g., let's say all I have to import is the data in FIELD1, FIELD3 and FIELD5. Would I need, by header:

        !TRNS    FIELD1   FIELD3   FIELD5
        !SPL     FIELD1   FIELD3   FIELD5
        !ENDTRNS
        TRNS     FIELD1   FIELD3   FIELD5
        SPL      FIELD1   FIELD3   FIELD5
        ENDTRNS

    or, by tabular position:

        !TRNS    FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !SPL     FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !ENDTRNS
        TRNS     FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        SPL      FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        ENDTRNS

    Alternately, if it were a comma-delimited input, would I need DATA1,DATA3,DATA5 or DATA1,,DATA3,,DATA5? Anyone have any experience with what QuickBooks is doing?
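
    For what it's worth, my understanding (worth verifying against Intuit's IIF documentation) is that the header row defines the columns: each TRNS/SPL data row is matched positionally against the fields named in the corresponding ! line, so you list only the fields you actually use, and the first form above is the right one. A minimal sketch of a transaction using only a handful of fields (tab-delimited; account names and dates are illustrative):

        !TRNS    TRNSTYPE  DATE        ACCNT         AMOUNT
        !SPL     TRNSTYPE  DATE        ACCNT         AMOUNT
        !ENDTRNS
        TRNS     DEPOSIT   03/15/2013  Checking      100.00
        SPL      DEPOSIT   03/15/2013  Sales Income  -100.00
        ENDTRNS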

    Read the article

  • Is redis a durable datastore?

    - by allyourcode
    By "durable" I mean, the server can crash at any time, and as long as the disk remains in tact, no data is lost (see ACID). Seems like that's what journaling mode is for, but if you enable journaling, doesn't that defeat the purpose of operating on in-memory data? Read operations might not be affected by journaling, but it seems like journaling would kill your write performance.

    Read the article

  • oci_connect Blank Page in PHP

    - by Bryan
    UPDATE (3/8/2010): I installed Xdebug and have it tracing my code. This is the output I am getting:

        TRACE START [2010-03-08 17:53:05]
        0.2090   327864   -> {main}() /data/aims3/http/octest.php:0
        0.2091   327988   -> ini_set(string(14), string(1)) /data/aims3/http/octest.php:3
        0.2093   327920   -> error_reporting(long) /data/aims3/http/octest.php:4
        0.2094   328048   -> oci_connect(string(8), string(8), string(25)) /data/aims3/http/octest.php:6

    The trace halts at that point. I have installed everything the same way on a local server and it works fine; to say I am at a complete loss would be putting it lightly. (NOTE: I ran make test and it returned FAIL on every test. I never ran this on my working machine to see if it reports the same errors. Any idea why make test would report FAIL but make doesn't report any errors?)

    I've installed the Oracle Instant Client with no reported errors, along with the OCI8 PECL package, and I'm at a loss: whenever I try to open a connection with oci_connect, it halts my entire PHP script. EXAMPLE:

        <?php
        ini_set("display_errors", "1");
        error_reporting(E_ALL);

        echo "before";
        $conn = oci_connect("username", "password", "host");
        echo "after";
        ?>

    This returns a completely blank page. The module is loaded (seen in phpinfo) and everything installed with no errors. I am at a complete loss.

        CentOS: 5.4
        Apache: 2.2.3
        PHP: 5.3.1
        InstantClient: 11.2
        oci8: 1.4.1

    Any thoughts? NOTES: The Apache error log reports nothing. Attempted debugging:

    1:

        <?php
        ini_set("display_errors", "1");
        error_reporting(E_ALL);
        echo "before";
        if (!function_exists('oci_connect')) die('Oracle Not Installed');
        echo "after";
        ?>

    Returns: beforeafter

    2: Changing host to //host. Returns: Same error
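
    One frequent cause of oci_connect hanging or dying only under Apache (while the same install works elsewhere) is that httpd's environment lacks the Instant Client library path. A sketch of the usual fixes on CentOS, assuming the Instant Client RPM's default location - adjust the path to match your install:

        # option 1: tell the dynamic loader system-wide
        echo /usr/lib/oracle/11.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf
        ldconfig

        # option 2: export it for Apache only, in /etc/sysconfig/httpd
        export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH

        service httpd restart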

    Read the article

  • Drobo FS vs. MacBook Pro: Finder works, Drobo Dashboard doesn't

    - by dash-tom-bang
    Does anyone have any experience with the new Drobo FS, specifically using it from a MacBook Pro? My experience thus far is this: Set up the Drobo Dashboard software (hereinafter called simply 'Dashboard') on my WinXP machine, which is hard-wired to the network to do the data migration from my NAS-that's-being-replaced (a 250G SimpleShare which works well enough but I was always afraid of losing the one disk). The Dashboard seems to work ok, except that the DroboCopy function doesn't work at all. This is the backup solution, which I can configure, and if I launch it (e.g. to back up from the old NAS to the Drobo) it spins the NAS, seeking the drive all over hell and creation, until finally giving up an hour+ later with zero files copied. Selecting only a subset of the data yields the same effect albeit more quickly. On my Mac I installed the Dashboard software too, since most of my fiddling with the device will be from my couch in the living room. Finder connects to the box just fine, fwiw, but Dashboard just sits there, "waiting for connection." This is considerably more bothersome than the above paragraph, but I figured I'd give whatever information I have. Drobo is insisting that I send them this "Debug Log" file that their software generates. Does anyone know what's in it? It's encrypted and they won't tell me, which spooks me just a bit; not like I'm terribly concerned about privacy but I don't want to be sending personal information out to every clown who says they "need" it in order to help me. thanks a ton, -tom!

    Read the article

  • Is it possible to do custom grouping in the ASP.NET ListView control?

    - by michielvoo
    You can only define a GroupItemCount in the ListView, but what if you want to do grouping based on a property of the items in the data source - sort of an ad-hoc GROUP BY? The data source is sorted on this property. I have seen some examples where markup in the ItemTemplate was conditionally shown, but I want to leverage the GroupTemplate if possible. Is this possible?

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld goes to 99.9% CPU for variable periods (between 2 and 20 minutes), and then goes back to a normal 0.1%-5%. I checked the processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 seconds, and about 20 processes in Sleep mode (maybe because the scripts don't close the mysql connection, but they are closed in about 5-10 seconds; I didn't write the scripts :P but the server was running fine for the last 2 years, since it was set up):

        | 15375 | root  | localhost | stoc  | Query | 0 | NULL | show processlist |
        | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User |
        | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |

    I checked the RAID; it seems OK:

        [root@db2]# cat /proc/mdstat
        Personalities : [raid5] [raid4] [raid1]
        md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1]
              136448 blocks [4/4] [UUUU]
        md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1]
              12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0]
              203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0]
              24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

    top sees the mysqld CPU load, but nothing else seems to be wrong:

        [root@db2]# top
        top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70
        Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie
        Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st
        Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers
        Swap: 12023800k total, 0k used, 12023800k free, 951028k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
         5754 mysql  19  0  236m  57m 5108 R 99.9  2.9 21:58.76 mysqld
            1 root   16  0  7216  700  580 S  0.0  0.0  0:00.39 init
            2 root   RT  0     0    0    0 S  0.0  0.0  0:00.00 migration/0

    I repaired all the mysql databases and reindexed the RAID... I'm running out of ideas. Does anyone have an idea what can go wrong with this server? Thank you
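
    One low-risk next step is to let MySQL name the guilty statements itself by enabling the slow query log before the next spike; a sketch of the my.cnf settings (these are the MySQL 5.1+ option names - 5.0 and earlier use log-slow-queries instead):

        # /etc/my.cnf, in the [mysqld] section
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql-slow.log
        long_query_time     = 1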

    Read the article

  • dojo/dijit ContentPane setting content

    - by Kitson
    I am trying to append some XML retrieved via dojo.xhrGet to a dijit.layout.ContentPane. Everything works OK in Firefox (3.6), but in Chrome I only get back 'undefined' in the particular ContentPane. My code looks something like this:

        var cp = dijit.byId("mapDetailsPane");
        // there are some existing widgets/content I want to clear
        // and replace with the new content
        cp.destroyDescendants();
        var xhrData = {
            url: "getsomexml.php",
            handleAs: "xml",
            preventCache: true,
            failOk: true
        };
        var deferred = new dojo.xhrGet(xhrData);
        deferred.addCallback(function(data) {
            console.log(data.firstChild); // gets a DOM object in both Firebug
                                          // and Chrome Dev Tools
            cp.attr("content", data.firstChild); // appends the XML to the doc in Firefox,
                                                 // but gives "undefined" in Chrome
        });

    Because I get back a valid Document object in both browsers, I know xhrGet is working fine, but there seems to be some difference in how the content is being set. Is there a better way to handle the return data from the request? There was a request to see my XML, so here is part of it:

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
             xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="672" height="1674">
          <defs>
            <style type="text/css">
              <![CDATA[ ...bunch of CSS... ]]>
            </style>
            <marker refX="0" refY="0" orient="auto" id="A00End" style="overflow: visible;">
            ...bunch more defs...
          </defs>
          <g id="endpoints">
            ...bunch of SVG with some...
            <a xlink:href="javascript:gotoLogLine(16423,55);" xlink:type="simple">...more svg...</a>
          </g>
        </svg>

    I have run the output XML through the W3C validator to verify it is valid. Like I said before, it works in Firefox 3.6. I tried it on Safari and I got the same "undefined", so it seems to be related to WebKit.
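
    One workaround worth trying, given that the failure follows WebKit: serialize the XML node to a string yourself before handing it to the ContentPane, so every browser receives plain markup rather than a DOM node (XMLSerializer is available in Firefox, Chrome and Safari; this is a sketch of a workaround, not a confirmed dijit fix):

        deferred.addCallback(function(data) {
            var markup = new XMLSerializer().serializeToString(data.firstChild);
            cp.attr("content", markup);
        });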

    Read the article

  • Queryable and BindingSource item add .net

    - by Alexander
    Hello! I wrote my own simple Queryable provider which retrieves data from a database. Now I need to bind this data to a BindingSource, and when a new item is added to the BindingSource, some event or method must be called to handle the add operation. I have tried to implement IBindingList on the List class which is returned when the query is completed, but the AddNew method is not called when the item is added. How can I implement this scenario?
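
    Rather than implementing IBindingList by hand, one well-trodden route is to wrap the query results in System.ComponentModel.BindingList<T>, which already implements IBindingList, and hook its events; a sketch (entity and persistence names are illustrative):

        // results from the custom Queryable provider
        var items = myQuery.ToList();
        var bindable = new BindingList<MyEntity>(items);

        // called when the BindingSource needs a brand-new item
        bindable.AddingNew += (s, e) => e.NewObject = new MyEntity();

        // fires after the item has actually been added to the list
        bindable.ListChanged += (s, e) =>
        {
            if (e.ListChangedType == ListChangedType.ItemAdded)
                SaveToDatabase(bindable[e.NewIndex]);   // hypothetical persistence call
        };

        bindingSource.DataSource = bindable;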

    Read the article

  • Business Intelligence Development Studio encounter error on Rendering

    - by user366796
    I'm trying to develop a data processing extension for SSRS 2008 in order to access the database through an Entity Framework model, but when I copy and register the extension in BI Development Studio, it gives me an error message while loading the extension. I built it targeting framework 4.0 with Visual Studio 2010, because my data model class library was built with that target framework. Any help will be greatly appreciated. Thanks.

    Read the article
