Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 644/916

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory.
    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking the checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turns out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed the date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to be changed, exactly when it was changed, or why it happened for these particular files, but at least it seems to explain the checksum discrepancy.
    Method 2: As I want the exact files to be duplicated, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file comparison test in FreeFileSync led to two similar yet different results: 31 files fail the checksum test, 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered but also some AVIs, MPGs and a large 7-zip file. Closer inspection of a JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down the discrepancy.
    Note: in both methods, every file has its correct file size after being copied.
    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
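
    For reference, a minimal way to verify a copy like this wholesale, assuming a GNU userland is available (Cygwin, a Linux live system, or similar); the paths are placeholders:

        # Hash every file in the source tree once
        cd /path/to/source
        find . -type f -print0 | xargs -0 md5sum > /tmp/source.md5

        # Verify the copy against those hashes and show only mismatches
        cd /path/to/copy
        md5sum -c /tmp/source.md5 | grep -v ': OK$'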

    Read the article

  • Copying SD Cards with "LaCie d2 Network"

    - by rjstelling
    The LaCie d2 Network has a feature whereby you can attach a USB drive, press the blue button at the front, and it will copy the drive contents (no computer required). (See this review for more info). USB 2.0 and eSATA ports are also provided but these are not designed for extending the d2 Network's storage. Rather, they allow you to connect portable drives for uploading their data to the d2 Network directly. The process is quite slick, too: just plug in a drive and press the big blue button on the front of the unit to trigger an immediate upload. This copies over everything on the external device and seems ideal for camera use. Is it possible to use a microSD or SD card adapter (like the Kingston MobileLite 9-in-1) and copy the contents of the card? I'm assuming the card reader just "looks like" a normal USB flash drive to the computer or (in this case) the LaCie d2 Network. Is this assumption correct? Do you know of any reason why this won't work?

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like RAID-Z1 would be a good fit, but evidently I would only be able to add whole additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or similar).
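
    For concreteness, a sketch of the layering described above, with hypothetical device names (whether a given ZFS-FUSE build supports every command and property used here is an open question):

        # Three-disk md RAID5 (device names are examples)
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

        # A plain ZFS pool on top of the single md device -- ZFS can detect
        # corruption here, but only has something to repair from if extra
        # copies are kept
        zpool create tank /dev/md0
        zfs set copies=2 tank             # optional; applies to newly written data

        # Later growth: add a disk to md, then let the pool use the larger device
        mdadm --add /dev/md0 /dev/sde
        mdadm --grow /dev/md0 --raid-devices=4
        zpool online -e tank /dev/md0     # or set 'autoexpand=on' if supported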

    Read the article

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-priv account to be able to make a modification to the redirected printer. I know it's not advisable, but we're not giving them access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx) I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to assign the privs you configure in the printer admin tool to redirected printer queues. As a test I added a non-priv test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privs (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups but the options I selected (all privs) are not set. Instead only the user's special privs box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options... the question is, why, and how can I convince it not to? Ian

    Read the article

  • How can I setup BluePill to Monitor a Rails App Running via Passenger (mod_rails)

    - by Jim Jeffers
    I recently launched a site running Phusion Passenger. Unfortunately, the site went down due to a frozen thread. I was able to save the server by issuing kill -9 on the specific PID. Still, I thought Passenger was able to manage this automatically. I have a server with 1GB of memory running one Rails app with Passenger allotted up to 7 instances. However, when I discovered the site was down, I found that Passenger had spawned 6 instances, with one of them using over 800 MB of memory, causing the server to swap. As a result I am hoping to set up something like Bluepill on the server, but I'm slightly confused as to how you go about doing it. Mainly because Bluepill expects to start/stop the processes it's monitoring. However, in our case, Passenger already restarts processes for us, so we only need to monitor the PIDs of Passenger's instances and kill them once they've grown too large. Has anyone here set up Bluepill to monitor a Rails app running under Phusion's Passenger? Any insight would be useful.
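
    As an aside on the narrow "only kill oversized workers" requirement: where Bluepill feels like overkill, a plain shell watchdog run from cron can do that one job. A hedged sketch, in which the memory limit and the 'Rails:' process-name pattern are assumptions to adjust for the app and Passenger version:

        #!/bin/sh
        # Kill Rails/Passenger workers whose resident memory exceeds a limit.
        LIMIT_KB=500000   # ~500 MB; assumption
        ps -eo pid,rss,args | grep 'Rails: ' | grep -v grep | \
        while read pid rss rest; do
            if [ "$rss" -gt "$LIMIT_KB" ]; then
                echo "killing $pid (rss ${rss} KB)"
                kill -9 "$pid"    # Passenger spawns a replacement on demand
            fi
        done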

    Read the article

  • Disable IPv6 on Debian VPS (Virtuozzo!)

    - by chris_l
    I have a Debian Lenny VPS that runs virtualized by Parallels/Virtuozzo. Currently, the network interface doesn't have an IPv6 address - and that's good, because I don't have an ip6tables configuration. But I assume that I could wake up one day and ifconfig will show me an IPv6 address for the interface - because I have no control over the kernel or its modules - they're under the control of the hosting company. That would leave the server completely vulnerable to attacks from IPv6 addresses. What would be the best way to disable IPv6 (for the interface or maybe for the entire host)? Usually I would simply disable the kernel module, but that's not possible in this case.
    Update: Maybe I should add that I can use iptables and everything normally (I'm root on the VPS), but I can't make changes to the kernel or load kernel modules because of the way Virtuozzo works (shared kernel). lsmod always returns nothing. I can't call ip6tables -L (it says that I need to insmod, or that the kernel would have to be upgraded). I don't think that changes to /etc/modprobe.d/aliases would have any effect, or do they?
    Networking config? I thought that maybe I can turn IPv6 off from /etc/network/... Is that possible? I just see that they've set up avahi, so I should probably change the setting use-ipv6=yes to "no" in /etc/avahi/avahi.conf (?) Has anybody already tried this solution, and can I rely on it? I don't know too much about avahi. Would it actually have any effect? Or could it even bring my entire interface down, once IPv6 is enabled by the kernel?
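
    For reference, a sketch of the sysctl route, assuming the shared Virtuozzo kernel exposes these knobs at all (older or container kernels may not, which is exactly the constraint described above):

        # Does the shared kernel expose the IPv6 disable switch at all?
        ls /proc/sys/net/ipv6/conf/all/ 2>/dev/null | grep disable_ipv6

        # If it does, turn IPv6 off for all interfaces now and at boot
        sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sysctl -w net.ipv6.conf.default.disable_ipv6=1
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf

        # Independently, stop avahi from announcing over IPv6
        # (on Debian the file is usually avahi-daemon.conf; adjust if it differs)
        sed -i 's/^use-ipv6=yes/use-ipv6=no/' /etc/avahi/avahi-daemon.conf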

    Read the article

  • Swapping Function (Fn) and Control (Ctrl) Keys on Lenovo ThinkPad W500

    - by Howiecamp
    I'd like to swap the Fn and Ctrl keys on my ThinkPad W500 (like many others!). I wanted to comment on http://superuser.com/questions/35228/how-can-i-switch-the-function-and-control-keys-on-my-laptop and StackOverflow question 514781 (please Google it because I don't have enough rep to include 2 hyperlinks), but I don't have enough rep to do so. What I've found so far:
    1. Numerous folks (in both the above questions and in other Google searches) indicate that Windows doesn't register the Fn key as a keypress, but using a tool that gives the ASCII value of a keypress (visit www mihov com / eng / am.html) I see the Fn key returning FF (perhaps FF in this case means 'not registered'). I also see that keys like Ctrl register with one ASCII code when pressed alone and another when pressed in combination with another key. Fn will only register when pressed alone, so Windows definitely isn't seeing the combo. This took a solution like AutoHotKey off the table.
    2. I ran KeyTweak (which shows you the hardware scan codes of a keypress), and the Fn key registered as 57443. Using this program I remapped Fn to the Ctrl key; this worked perfectly. However, I suspect that because of the issue in #1, the combo of, for example, Fn + C did not execute a copy.
    Short of retraining my pinky I'm actually considering removing the keyboard and resoldering the connections to swap those keys. I'd love to get some input as to the root technical issue(s) and possible solutions here. Thanks

    Read the article

  • Subdomains and address bar

    - by Priednis
    I have a fairly noob question about how subdomains work. As I understand it, at first the DNS server specifies that a request for a certain subdomain.domain.com has to go to the IP address of domain.com, and the web server at domain.com further processes the request and displays the needed subdomain page. It is not entirely clear to me how the server (for example Apache) does it. As I understand it, there can be entries in the vhosts.conf file which specify folders that contain the subdomain data. Something like:
        <VirtualHost *>
            ServerName www.domain.com
            DocumentRoot /home/httpd/htdocs/
        </VirtualHost>
        <VirtualHost *>
            ServerName subdomain.domain.com
            DocumentRoot /home/httpd/htdocs/subdomain/
        </VirtualHost>
    There can also be redirect entries in .htaccess files like:
        rewritecond %{http_host} ^subdomain.domain.com [nc]
        rewriterule ^(.*)$ http://www.domain.com/subdomain/ [r=301,nc]
    However, in this case the user gets directed to the directory which contains the subdomain data, but the user gets "out" of the subdomain. I would like to know: how, when going to subdomain.domain.com, does the subdomain.domain.com beginning of the address remain visible in the address bar of the browser? Can it be done by an alternate entry in the .htaccess file? If a VirtualHost entry is specified in the vhosts.conf file, does that mean a new user account has to be specified for access to this directory?
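
    For illustration, a sketch of serving the subdomain from its own vhost instead of redirecting to a folder URL - which is what keeps subdomain.domain.com visible in the address bar (Debian-style Apache layout assumed; paths are the example ones above):

        # Create a dedicated vhost file for the subdomain, e.g.
        # /etc/apache2/sites-available/subdomain.domain.com containing:
        #
        #   <VirtualHost *:80>
        #       ServerName subdomain.domain.com
        #       DocumentRoot /home/httpd/htdocs/subdomain/
        #   </VirtualHost>
        #
        # then enable it and reload Apache (a2ensite is Debian/Ubuntu specific):
        a2ensite subdomain.domain.com
        apache2ctl graceful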

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks with partitions of around 900 GB, which are 70-90% full.
    Some background: the first thing I noticed was that Acronis-WesternDigital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650 GB of data (a 900 GB partition), it would have taken 3 days! The same task done with the boot-CD version of this Acronis release took about 2 hours (SATA3 copy from one disk to another, both around 110 MB/s). Now, after I have done all my backups, I wanted to remove some obsolete partitions and resize the leftovers to full HDD size. Of course, this usually takes quite some time - in this case, to extend this 900 GB partition to 931 GB (30+ GB from the front, 1+ GB from the end), it will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.
    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning - why does it have to copy all the data around, can't it just stay in place?!

    Read the article

  • Referencing groups/classes from Puppet dashboard in my site manifest

    - by Banjer
    I'm using Puppet Dashboard as my ENC and I'm not sure how to reference or use class and group classifications from /etc/puppet/manifests/site.pp. I have two groups defined in the dashboard: CentOS6 and SLES11. What should my site.pp look like if I want to include a certain list of modules in the CentOS6 group and a certain list of modules in the SLES11 group? I'm trying to do something like this:
        # /etc/puppet/manifests/site.pp
        node basenode {
            include hosts
            include ssh::server
            include ssh::client
            include authentication
            include sudo
            include syslog
            include mail
        }
        node 'CentOS6' inherits basenode {
            include profile
        }
        node 'SLES11' inherits basenode {
            include usrmounts
        }
    I have OS-specific case statements within my modules, but there are some modules that will only be applied to a certain distro. So I suppose I have two questions: 1) Is this the best way to apply modules/resources in an OS-specific manner? Or does the above make you want to vomit? 2) Regardless of #1, I'm still curious as to how to reference classes, groups, and nodes from Dashboard within my manifests. I've read the External Nodes doc, but I'm not seeing how they correspond to manifests. Thanks all.

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by dotnetdev
    I have deployed a bunch of Windows server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server. I provide the hostname of the desired target server and this fails, but then if I provide the IP, this works. How can I pass the hostname and have this resolve to the IP? Is there anything I need to look for in the DNS server? It has records of the hostnames (in forward lookup, I think), but reverse is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know what the subnet mask is because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
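
    A couple of quick queries can separate "record missing in DNS" from "client not using the right resolver or suffix"; a sketch in which the hostnames and DC address are placeholders (nslookup behaves the same whether run from a Windows or a Linux prompt):

        # Ask the DC's DNS server for the record explicitly
        nslookup targethost.corp.local 10.0.0.10

        # Ask whatever resolver the client is actually configured with
        nslookup targethost.corp.local
        nslookup targethost      # short name: depends on the DNS suffix search list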

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets to connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html. Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following: Created a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in the public subnet (call it 10.10.1.0/24). Changed "Source / Dest Check" to disabled. Created a new entry in the default routing table (which is used by 10.10.2.0/24) and had it point to the ID of the newly created instance. Associated an Elastic IP address with the NAT instance. Allowed all outbound traffic on the security group of the NAT instance. Ensured that all traffic could pass between the two subnets. I've also tried doing this with an existing instance using iptables, but had no luck. And I have verified that net.ipv4.ip_forward is 1 (via sysctl), just in case anyone was wondering. And I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
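
    For comparison, the OS-level pieces a hand-rolled NAT instance needs (Amazon's NAT AMI normally sets this up itself; the interface name is an assumption, the subnet is the one named above):

        # On the NAT instance in the public subnet
        sysctl -w net.ipv4.ip_forward=1        # already verified above

        # Masquerade traffic arriving from the private subnet out the public interface
        iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE

        # Quick functional test from an instance in 10.10.2.0/24
        curl -sI https://s3.amazonaws.com | head -n 1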

    Read the article

  • BTrFS crashhhh?

    - by bumbling fool
    I create a new BTrFS raid10 file system using two 250GB drives and the second partition on a third 80GB drive. I create a subvol and a snapshot. I mount the snapshot and start copying 8GB of data to it. It gets to around 1GB, the Desktop disappears, and what looks like a non-interactive terminal comes up with dump/crash information. I don't have a camera handy or I'd take a picture and post it. It basically looks like stack trace info. CTRL-ALT-F7 will eventually bring back the Desktop, but the entire BTrFS portion of the OS is hung and non-responsive until I reboot. I've reformatted and reproduced this problem 3 times now and I'm about to give up :( I realize it is possible this problem is not entirely BTrFS' fault because I'm on Natty, which is still alpha. More granular details in case I'm an idiot:
        1) Create FS: sudo mkfs.btrfs -m raid10 -d raid10 /dev/sda2 /dev/sdb /dev/sdc
        2) Initial temporary mount: mkdir /btrfs && sudo mount -t btrfs /dev/sda2 /btrfs
        3) Create subvol: btrfs s c /btrfs/vm
        4) Create initial snapshot (optional): btrfs s sn /btrfs/cantremember.snap.something
        5) Unmount /btrfs and mount /btrfs/vm: sudo mount -t btrfs -o subvol=vm /dev/sda2 /btrfs/vm
        6) Copy data to the subvolume.
        7) Balance data across drives (optional): btrfs f bal <path> (never get to this step 7...)
    Am I doing something wrong?

    Read the article

  • Update git on mac

    - by Meltemi
    I can't remember how I installed Git a while back... but now it's living in /usr/bin/git and needs to be updated. I don't care how (pre-compiled or build my own), but what I don't want is another version existing somewhere else. I vaguely remember curl(ing) down the source and compiling it, but I'm not positive. Anyway, what's the easiest way to keep Git up to date under Mac OS X? Side question: I'm not that familiar with Git. Once it's installed, is it ENTIRELY contained within its directory? So, in my case, is everything about Git on my machine (excluding the actual code repositories, of course) in /usr/bin/git/? If so, can I just move Git around with a simple mv -R /usr/bin/git /opt/git? Then update my $PATH and everything should work as before? If so, then I suppose I could just install again by any method and to any directory... and then move the new one into /usr/bin, replacing the old version?!? Or is this bad?
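
    A few quick checks help before deciding; note that /usr/bin/git is a single executable, not a directory, so there is no self-contained folder to move around. A sketch (the Homebrew route is just one option):

        # Where is git coming from, and what version is it?
        which -a git          # lists every git on the PATH, in order
        git --version
        file /usr/bin/git     # a single binary, not a directory

        # One low-maintenance option: install a newer git via Homebrew
        # (it lands in /usr/local/bin; make sure that precedes /usr/bin in PATH)
        brew install git
        hash -r && git --version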

    Read the article

  • Ubuntu root privs installation issue

    - by Pam
    I am a fairly new Ubuntu user (and Linux user, for that matter) and I just downloaded a program whose installer was a .sh file. Not thinking, I copied the installer to an /opt subdirectory, thinking that I was going to install the application there: sudo cp ~/Downloads/fooInstaller.sh /opt/someDir I can't remember, but I either had to use sudo because /opt required it, or I just used it without thinking, but in any case, I prefixed the command with sudo. Once in /opt/someDir, I executed the installer again, using sudo: sudo sh fooInstaller.sh The terminal went crazy, and a few seconds later, a graphical install wizard popped up that guided me through the rest of the process. At the end of the wizard I was prompted to launch the program, and I did, and everything was great. Until... I closed the program, and attempted to add it to my Ubuntu "panel" (the icon panel at the top of the screen). The program was installed to /usr/local/foo/theProgram, and so I specified that URL as the command in the custom app launcher. When I open the program through the panel/launcher (at the top of the screen), the program doesn't load or operate correctly. I get a lot of error messages complaining about being denied permissions. I'm assuming that this is a "superuser/installation/privs" issue, and not a problem with the application (hence this post at superuser.com instead of the application's forums), because when I launch the program from the terminal with sudo, it opens and executes perfectly fine, just like it did the first time around after the install wizard finished. I realize I'm probably going to have to uninstall the program completely, and re-install it differently. Finally, my question: After uninstalling, can I avoid all these issues by just running the installer (sh fooInstaller.sh) right out of my Downloads directory, sans the sudo prefix? If not, how do I get the program to install without root privs so that I can add it to my panel/launcher and get it executing correctly? Sorry for the long post but I didn't want to omit any details because, as I'm sure you can tell, I'm not really sure I know what I'm doing. Thanks for any help here!
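
    For what it's worth, a small sketch of how to check the "root-owned leftovers in the home directory" theory before reinstalling (the dot-directory name is hypothetical):

        # Files under your home directory that ended up owned by root --
        # a typical side effect of running a GUI installer via sudo
        find "$HOME" -user root -ls

        # If the offenders are the app's own settings/cache dirs, reclaiming
        # them is often enough (directory name is hypothetical)
        sudo chown -R "$USER":"$USER" "$HOME/.fooapp"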

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at Snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.) By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy headed by a page at the same level as the Home page). I know that I can copy web pages, one by one, using the admin login via the web-based DNN interface. But I'd prefer a script or wizard of some sort (that runs scripts behind the scenes) that allows me to:
        1) specify a web page that I want to copy (along with the hierarchy of pages under it)
        2) specify the names and titles of the new top-level pages
        3) specify whether the contained modules of the top-level page that I want to copy are to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface)
        4) repeat 3) for each of the source pages in the hierarchy that I want to copy
    You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website - instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts as its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So:
        Home Page
          General Information Page
            Contact Info
            Events
    --
        Home Page
          General Information Page
            Contact Info
            Events
          General Information Page Kiwanis - Bloomfield
            Contact Info
            Events
          General Information Page Kiwanis - Dayton
            Contact Info
            Events
    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc nor from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.
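
    If smartmontools can be run against the drive (e.g. from the Ubuntu boot CD mentioned above), the full attribute table is more telling than the single Verified flag; a sketch, with the device name as an example:

        # Overall health plus the raw SMART attribute table
        smartctl -a /dev/sda

        # Rough rule of thumb for "disk vs. controller/cable":
        #   rising Reallocated_Sector_Ct / Current_Pending_Sector points at the disk;
        #   a rising UDMA_CRC_Error_Count points at the link or controller side
        smartctl -t short /dev/sda      # run a short self-test...
        smartctl -l selftest /dev/sda   # ...then read its result here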

    Read the article

  • How to set up Zabbix to monitor SQL Server Failover Active-Passive Cluster?

    - by Sebastian Zaklada
    It should be simple, so it is most likely my approach being totally off, and someone will hopefully prod me in the right direction. We have a Zabbix 2.0.3 server instance set up monitoring a bunch of different servers, but now we need to set it up to monitor and send alerts regarding the SQL Server 2008 R2 failover active-passive cluster. Essentially, this is a 2-server cluster, where only one of its nodes can be "active" at a given time, serving all SQL Server related requests, while the other server just "sleeps" and - from the point of view of anyone logged on to that server - has all of the SQL Server related services in a stopped state. We have tried setting up Zabbix agents on both servers, using SQL Server 2005 templates (we could not find any 2008-specific ones and the 2005 ones always seemed to work just fine for monitoring 2008 R2 instances) and configuring the Zabbix server for both of the servers, but we end up having constant alerts for whichever server is currently the passive one in the cluster. We have been able to look up various methods of actually monitoring the failover, but we have not been able to find any guidance on how to instruct Zabbix that, in this particular case, only one of the servers in the group is expected to be in the online state, while the other can be just discarded and should not raise any alerts. I hope I made myself clear. Thanks for any guidance. I am out of ideas.

    Read the article

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all, First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc, this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy. My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally). Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here) Should I set a particular write-ordering mode, activate barriers, etc?
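
    A couple of the knobs hinted at above, as a minimal sketch; the mount point and share name are hypothetical, and both options trade write performance for durability, which would need testing on this hardware:

        # ext4 side: keep write barriers on and shorten the journal commit interval
        # (example fstab options; device and mount point are hypothetical)
        #   /dev/sda2  /data  ext4  defaults,barrier=1,commit=1  0 2
        mount -o remount,commit=1 /data

        # Samba side: make smbd flush every write on the upload share (smb.conf):
        #   [uploads]
        #      strict sync = yes
        #      sync always = yes
        testparm -s | grep -E 'strict sync|sync always'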

    Read the article

  • Monitoring / metric collection for system collectives that change a lot in time (a.k.a. cloud)

    - by Florin Andrei
    When your server fleet doesn't change a lot in time, like when you're using bare-metal hosting, classic monitoring and metric collection solutions (Nagios, Munin) work well. But if the number of systems varies a lot in time, and may in fact vary rapidly, classic software is more difficult to set up and use. E.g., trying to make Nagios (monitoring) keep up with a rapidly evolving cloud infrastructure can be cumbersome. Same for Munin (metric collection). It's not just the configuration, but the way the information is conveyed to the user, or displayed, is inadequate for the cloud. What are some possible alternatives that work well with the cloud? The goals are to collect and display metrics (analogous to Munin), and generate alerts when certain metrics go out of bounds or when certain services are unavailable (analogous to Nagios), and do everything in a cloud-friendly manner. Some cloud providers offer monitoring / metric collection as services, but not always, and if you use more than one provider you don't want to become too dependent on just one vendor. So provider-independent solutions are required. EDIT: I am asking this question in a general fashion - not limited to any given cloud infrastructure (like OpenStack), but in the general case of using arbitrary cloud providers.

    Read the article

  • How to read an Excel file, get and set the information using POI

    - by user1399713
    I'm using Java to read a form that is in an Excel spreadsheet that the user fills in with information about a geometric shape. Ex: Shape: _________ Color: _________ Area: _________ Perimeter: ________ So far, with the code I have, I can read what I want in the form and print out the values of Shape, Color, Area, Perimeter.
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.Cell;
        import org.apache.poi.ss.usermodel.Name;
        import org.apache.poi.ss.usermodel.Row;
        import org.apache.poi.ss.usermodel.Sheet;
        import org.apache.poi.ss.util.AreaReference;
        import org.apache.poi.ss.util.CellReference;

        public class RangeSetter {

            /**
             * @param args
             * @throws IOException
             */
            public static void main(String[] args) throws IOException {
                FileInputStream file = new FileInputStream(new File("test2.xls")); // C:\Users\Yo\Documents

                // Setup code
                String cname = "Shape";
                HSSFWorkbook wb = new HSSFWorkbook(file); // retrieve workbook

                // Retrieve the named range
                // Will be something like "$C$10,$D$12:$D$14"
                int namedCellIdx = wb.getNameIndex(cname);
                Name aNamedCell = wb.getNameAt(namedCellIdx);

                // Retrieve the cells at the named range and test their contents
                // Will get back one AreaReference for C10, and
                // another for D12 to D14
                AreaReference[] arefs = AreaReference.generateContiguous(aNamedCell.getRefersToFormula());
                for (int i = 0; i < arefs.length; i++) {
                    // Get all cells referenced by this Area
                    CellReference[] crefs = arefs[i].getAllReferencedCells();
                    for (int j = 0; j < crefs.length; j++) {
                        // Check it turns into real stuff
                        Sheet s = wb.getSheet(crefs[j].getSheetName());
                        Row r = s.getRow(crefs[j].getRow());
                        Cell c = r.getCell(crefs[j].getCol());
                        if (c != null) {
                            switch (c.getCellType()) {
                                case Cell.CELL_TYPE_STRING:
                                    System.out.println(c.getStringCellValue());
                                    break;
                            }
                        }
                    }
                }
            }
        }
    What I want to do is create a method that gets that information and another that sets it. So far I can only print to the console.

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of, as would be more typical, all viruses with a name starting with W). Is there a common virus list somewhere, or is it, as I suspect, that every antivirus manufacturer is free to come up with its own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory, and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus?
    Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I would still like to know if it would be possible for me to verify this in any other way, though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.

    Read the article

  • samba "username map" stopped to work after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual for CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with Samba 3.6, whereas the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow and the Samba accounts with pdbedit. Homes were on an NFS drive. The translation of Unix accounts to Samba accounts is located in /etc/samba/smbusers. Strangely enough, on some Windows clients there was a problem connecting to the Samba shares. In one case the only thing that worked was, instead of giving the Windows name, to use the Unix account. In another, it was possible to mount the network drive and to open it in Windows Explorer, however other applications like Total Commander, at the attempt of opening this drive, gave the message "Cannot connect to z:" (sometimes at this moment user/pass were requested). The smb.conf has the following entries:
        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...
        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775
    The smbusers file:
        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris
    Of course testparm runs without any errors. With Samba 3.5 I was used to log output of the form "Mapped user kris to krisr". Nothing like this happens now, just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but those reports were not helpful for me. I'm seriously thinking about downgrading back to Samba 3.5, but before taking this step I wanted to ask if somebody knows the solution to these problems.
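
    Before downgrading, a few checks can show whether 3.6 is consulting the map at all; a sketch using the names from the config above (the log location is the usual CentOS one and may differ):

        # Is the map actually in the running configuration?
        testparm -s 2>/dev/null | grep -i 'username map'

        # Does the mapped Windows name work against the local smbd?
        smbclient -L localhost -U Kris

        # Raise the debug level temporarily and watch for the mapping (or its absence)
        smbcontrol smbd debug 3
        tail -f /var/log/samba/log.smbd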

    Read the article

  • Apache: rewrite port 80 and 443 - multiple SSL vhosts setup

    - by Benjamin Jung
    SETUP:
        - multiple SSL domains are configured on a single IP, by using vhosts with different port numbers (on which Apache listens)
        - Apache 2.2.8 on Windows 2003 (no comments on this pls)
        - too many Windows XP users, so SNI isn't an option yet
    There may be reasons why it's wrong to use this approach, but it works for now. vhosts setup:
        # secure domain 1
        <VirtualHost IP:443>
            SSL stuff specifying certificate etc.
            ServerName domain1.org
        </VirtualHost>
        # secure domain 2
        <VirtualHost IP:81>
            SSL stuff for domain2.org
            ServerName domain2.org
        </VirtualHost>
    GOAL: Some folders inside the domain2.org docroot need to be secure. I used a .htaccess file to rewrite the URL to https on port 81:
        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^81$
        RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} [R]
    Suppose I put the .htaccess in the folder 'secfolder'. When accessing http://domain2.org/secfolder this gets successfully rewritten to https://domain2.org:81/secfolder.
    ISSUE: When accessing https://domain2.org/secfolder (without port 81), the certificate from the first vhost (domain1.org) is used and the browser complains that the site is insecure because the certificate is not valid for domain2.org. I thought that RewriteCond %{SERVER_PORT} !^81$ would also rewrite https://domain2.org to https://domain2.org:81, but it doesn't. It seems that the .htaccess file is not being used at all in this case. At this point I am not sure how to apply a RewriteRule to https://domain2.org. I tried creating an additional vhost for domain2 on port 443 before the one for domain1.org, but Apache seems to choke on that. I hope someone of you has an idea how to approach this. TIA.
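
    While experimenting, one way to see exactly which certificate Apache hands out for each host/port combination (the domain names are the placeholders used above):

        # Which certificate is served on the default SSL port?
        openssl s_client -connect domain2.org:443 < /dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer

        # ...and on the per-domain port?
        openssl s_client -connect domain2.org:81 < /dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer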

    Read the article

  • Need Info on the Hidden Switch in SET - "/S" How to implement

    - by ttyl
    I am having some problems doing a proper search for "SET/S" or "SET /S" on Google and other search providers. The difficulty arises with the SLASH "/": it is commonly used in search engines to add a "nearness" to the search parameter. I have found no way to escape the SLASH when searching for a SLASH. For those on this community, try searching this domain with the two search terms listed above. It just doesn't work; it ends up looking for SET S instead. But I digress. So I'm asking the uber-gurus on this board to help me find the documentation for /S and how to implement SET /S in a batch file. SET is an internal DOS/cmd command and allows many things including prompting the user, integer math and writing environment strings. Looking at this link: http://www.robvanderwoude.com/os2set.php it appears that /S is only for OS/2, but I'm thinking that this might not be the case, due to this: http://www.dostips.com/forum/viewtopic.php?t=2704, where it is apparently used with substrings and macros. Any help is much appreciated.

    Read the article
