Search Results

Search found 13070 results on 523 pages for 'simply tom'.


  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting an error: the pool Storage no longer automatically mounts.

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says it's corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:

                Storage                 UNAVAIL  insufficient replicas
                  raidz1                UNAVAIL  corrupted data
                    805066522130738790  ONLINE
                    sdd3                ONLINE
                    sda3                ONLINE
                    sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on. For reference, this was set up this way because at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for swap. Any clue on how to recover this ZFS pool?
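
    One starting point, assuming the stray numeric entry is a stale device label: read the ZFS labels directly with zdb and retry the import using stable device names. This is a sketch, not a guaranteed recovery path; the device paths are examples.

        # Dump the ZFS label of each member to see what the pool metadata
        # records for it (substitute your real devices):
        sudo zdb -l /dev/sdd3
        sudo zdb -l /dev/sdc

        # Retry the import using persistent names, in case the remaining
        # confusion comes from shifted sdX assignments:
        sudo zpool import -d /dev/disk/by-id
        sudo zpool import -d /dev/disk/by-id Storage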

  • Incredibly low disk performance on HP DL385 G7

    - by 3molo
    Hi. As a test of the Opteron processor family, I bought an HP DL385 G7 6128 with an HP Smart Array P410i controller - no cache memory. The machine has 20 GB RAM and 2x146GB 15k rpm SAS + 2x250GB SATA2 drives, both pairs in RAID 1 configurations. I run VMware ESXi 4.1. Problem: even with only one virtual machine - I tried Linux 2.6, Windows Server 2008 and Windows 7 - the VMs feel really sluggish. With Windows 7, the VMware Converter installation even timed out. I tried both the SATA and SAS disks; the SATA disks are nearly unusable, while the SAS disks feel extremely slow. I can't see a lot of disk activity in the Infrastructure Client, but I haven't been looking for causes or even tried diagnostics, because I have a feeling it's either the cheap RAID controller or simply its lack of cache memory. Despite the problems, I continued and installed a virtual machine that serves a key function, so it's not easy to take it down and run diagnostics. I would very much like to know what you make of it: is it more likely a problem with the controller/disks, or is it low performance because of budget components? Thanks in advance.
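
    One hedged first check, given that the P410i here has no cache module: on these controllers write-back caching is disabled without a battery- or flash-backed cache, which by itself can explain abysmal RAID 1 write latency. If the HP offline bundle with hpacucli is installed on the host (an assumption - it ships with HP's custom ESXi image), the controller state can be inspected:

        # Show controller, cache module and write-cache status:
        hpacucli ctrl all show config detail

        # If a backed cache module is present, the read/write ratio can
        # be tuned (the slot number is an assumption):
        hpacucli ctrl slot=0 modify cacheratio=25/75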

  • Help, my CentOS servers keep going down with "No route to host" after a random uptime

    - by user249071
    Hello. I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, plus some read-only NFS mounts between them. They receive some RPC commands from a main server to start download processes with wget - nothing fancy. But their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up much - 250 network connections max, 15% processor usage, and memory doesn't even fill up (2.5 GB out of 8 GB). I have no idea why a Linux server can go down like that; they aren't even public servers - no domain names installed, no public serving of sites. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed snort or other tools to check whether we are under DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't... Thank you in advance.
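
    A quick thing to rule out, given that restarting the network "fixes" it: an overflowing connection-tracking table, which silently drops new connections. A sketch (the sysctl names are for newer kernels; older CentOS kernels use ip_conntrack instead of nf_conntrack):

        # Compare current tracked connections against the limit:
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

        # Look for "table full, dropping packet" messages:
        dmesg | grep -i conntrack

        # If the table is the problem, raising the ceiling is a common fix:
        sysctl -w net.netfilter.nf_conntrack_max=131072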

  • My SSD stopped working as it blue-screens right after Windows 7 launches; how can I reset it for a new Windows install?

    - by HattoriHanzo
    As I mentioned in the title, my SSD suddenly started to fail: after launching Windows 7 it goes completely idle, and after a few minutes it blue-screens and restarts. I have another HD with Windows XP on it, which works fine; from there I can also see the SSD and access everything on it. Windows 7 on the SSD does work in safe mode, though I haven't managed to find out what causes the problem. Since I can still access the files and save them to another HD, I'm looking for the best way to wipe the SSD and reinstall Windows 7. I have so far failed to find an easy-to-follow (or even easy-to-understand) guide; different sources on the internet recommend different ways of doing this, and some guides are simply full of terms I have no clue about. It's an OCZ Vertex 2 120GB. I'd very much appreciate advice on what I should do, preferably in a way that regains the best possible performance as well. It doesn't matter if I don't understand the science behind it, as long as I can follow the steps. Thanks!
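
    One way to wipe the drive without third-party tools, assuming you boot from the Windows 7 install DVD: press Shift+F10 at the first setup screen to get a command prompt, then clear the SSD with diskpart (the disk number below is an assumption - pick yours by size from "list disk"):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> clean
        DISKPART> exit

    Setup can then create a fresh, properly aligned partition on the empty drive during installation.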

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory. For example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However - I may be wrong - it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/ruby/blog/public
            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that. When I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page; but when the DocumentRoot is set to the public directory, I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way I can make it so I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way, if I want to create a new app, all I have to do is rails new app - no Apache .htaccess or VirtualHost configuration required.
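
    A sketch of the classic Passenger sub-URI setup, assuming apps named app1 and app2: symlink each app's public directory into a plain DocumentRoot, then declare each sub-URI with RailsBaseURI.

        # Link each app's public/ into the static DocumentRoot:
        sudo ln -s /var/www/ruby/app1/public /var/www/main/app1
        sudo ln -s /var/www/ruby/app2/public /var/www/main/app2

        # Then, inside the existing <VirtualHost>:
        #   DocumentRoot /var/www/main
        #   RailsBaseURI /app1
        #   RailsBaseURI /app2

    New apps still need one symlink and one RailsBaseURI line each, but no new subdomain or VirtualHost.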

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems - or just not understanding what needs to be done. There are two domains:

    - The corporate domain, where all users log in normally.
    - The process domain, which contains the database the Web API needs to access.

    The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:

        corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)

    We don't really understand how to deploy our browser/Web API application in this scenario:

    - Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
    - Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
    - From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
    - In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint?
    - Or is there some better way of deploying this (i.e., how are ASP.NET Web API single-page applications, written entirely in HTML5 and JavaScript, supposed to be deployed to production environments)?

    NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

  • Monitoring instantaneous network throughput at one second intervals?

    - by Shaddi
    For a testing setup I have, I need to monitor the throughput through a "router"* at regular intervals of around 5 seconds or less (sub-second intervals would be very nice, but are not required). Ideally, I would be able to generate a file which contains both the number of bytes and the number of packets seen during each interval. I will eventually generate a time series of throughput from this data. On a previous setup using an older version of FreeBSD, there was a tool called "bpfmon" which gave me this information. However, I need to do this under a modern version of Linux (namely, Ubuntu 11.04). I have looked at both iptraf and iftop, but these do not appear to provide the resolution I need, nor do they seem to easily allow scraping of the data I want. I understand iptables statistics may be able to give me what I'm after, but the examples I've seen seem to rely on repeatedly reading and resetting the traffic counters, which could give inaccurate results since read-and-reset is not an atomic operation. I already capture a tcpdump trace of the traffic I'm interested in on the link I want to monitor, so I am open to approaches which simply parse that. I feel like this must be a common problem, though, so I am hoping there is a standard "best practice" tool for accomplishing it.

    *I say "router" in quotes because I am really talking about a machine with two bridged NICs through which all the traffic I'm interested in passes.
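
    A minimal sketch of one approach: sample the kernel's per-interface counters once a second and log the deltas. The interface name is an assumption (a bridge would typically be br0):

        #!/bin/bash
        # Log bytes/packets per one-second interval from /sys counters.
        IF=br0
        S=/sys/class/net/$IF/statistics
        rb=$(cat "$S/rx_bytes"); rp=$(cat "$S/rx_packets")
        while sleep 1; do
            nb=$(cat "$S/rx_bytes"); np=$(cat "$S/rx_packets")
            echo "$(date +%s) bytes=$((nb - rb)) packets=$((np - rp))"
            rb=$nb; rp=$np
        done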

  • One 16K random read issues two SCSI I/O requests (16K and 4K) in Linux

    - by hiroyuki
    I noticed a weird issue when benchmarking random read I/O for files on Linux (2.6.18). The benchmarking program is my own, and it simply keeps reading 16 KB of a file from a random offset. I traced I/O behavior at the system call level and the SCSI level with SystemTap, and I noticed that one 16 KB sysread issues 2 SCSI I/Os, as follows:

        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 128137183232
        SCSI random(8472) 0 1 0 0 start-sector: 226321183 size: 4096 bufflen 4096 FROM_DEVICE 1354354008068009
        SCSI random(8472) 0 1 0 0 start-sector: 226323431 size: 16384 bufflen 16384 FROM_DEVICE 1354354008075927
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 21807710208
        SCSI random(8472) 0 1 0 0 start-sector: 1889888935 size: 4096 bufflen 4096 FROM_DEVICE 1354354008085128
        SCSI random(8472) 0 1 0 0 start-sector: 1889891823 size: 16384 bufflen 16384 FROM_DEVICE 1354354008097161
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 139365318656
        SCSI random(8472) 0 1 0 0 start-sector: 254092663 size: 4096 bufflen 4096 FROM_DEVICE 1354354008100633
        SCSI random(8472) 0 1 0 0 start-sector: 254094879 size: 16384 bufflen 16384 FROM_DEVICE 1354354008111723
        SYSPREAD random(8472) 3, 0x16fc5200, 16384, 60304424960
        SCSI random(8472) 0 1 0 0 start-sector: 58119807 size: 4096 bufflen 4096 FROM_DEVICE 1354354008120469
        SCSI random(8472) 0 1 0 0 start-sector: 58125415 size: 16384 bufflen 16384 FROM_DEVICE 1354354008126343

    As shown above, one 16 KB pread issues 2 SCSI I/Os. (I traced SCSI I/O dispatching with the probe scsi.iodispatching; please ignore the values other than start-sector and size.) One SCSI I/O is the 16 KB read requested by the application, and that's OK. The problem is the other 4 KB I/O, which I don't know why Linux issues. Of course, I/O performance is degraded by the weird 4 KB I/O, and I am having trouble. I also used fio (a well-known I/O benchmark tool) and saw the same issue, so it's not the application. Does anybody know what is going on? Any comments or advice are appreciated. Thanks.
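
    One cheap experiment, assuming the stray 4 KB read comes from readahead or the page cache rather than your application: shrink the device readahead and re-run the benchmark with the cache bypassed; if the 4 KB reads disappear, the block layer (not the app) was issuing them. The device name and file path are assumptions.

        # Check, then disable, readahead (value is in 512-byte sectors):
        blockdev --getra /dev/sdb
        blockdev --setra 0 /dev/sdb

        # Reproduce the access pattern with O_DIRECT via fio:
        fio --name=randread --filename=/data/testfile --rw=randread \
            --bs=16k --direct=1 --size=1g

    If the 4 KB read persists under O_DIRECT, it is more likely filesystem metadata (e.g. ext3 indirect blocks) being fetched to map the 16 KB extent.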

  • Can't access random web pages on my MacBook Pro 2012?

    - by Faruk Sahin
    Sometimes I can't access random web pages: the page simply doesn't load. If I wait for around a minute doing nothing, it will load. It happens very randomly and intermittently. Sometimes it starts when I try to access youtube.com or cnn.com. Once it starts, it happens every minute or every 5 minutes, for random web pages. But if I am downloading something, the download continues without any interruption, and I am also able to ping the address I can't browse. Then, if I wait for around a minute, everything starts to work fine on the browser side again. I have tried a lot of different browsers. I have tried changing my DNS servers to Google's public DNS servers. Using a cable instead of the wireless connection doesn't help either. No one else on the network has this problem but me. What can be the problem?
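
    A hedged way to see where the stall lives when it happens - DNS, TCP connect, or the transfer itself (the URL is an example):

        curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' http://www.cnn.com/

    If time_namelookup dominates during an episode, the issue is name resolution despite the DNS change; if connect does, it points at the network path instead.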

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID 5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, giving an array capacity of 1.5 TB. One day (a year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, for example) to an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID 0 or to two separate discs, because I had no temporary storage (AFAIK one cannot simply convert RAID 5 to RAID 0 in place). So now, for about a year, I have been running a non-redundant RAID 5 with 2 of 3 discs. Sometimes one of the discs gets a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives has been failing very frequently (every few hours), so managing this has become really annoying. My questions are:

    - Is there any disadvantage (apart from the annoying management) of a non-redundant RAID 5 (with one drive less than typical) over a RAID 0? If I understand correctly, both have no redundancy and the same capacity, and on a temporary drive failure I can restart the array in both cases, assuming the drive itself still works after the failure.
    - Can the drive contents change during a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)?
    - Since the drive most probably just has a defective contact, causing it to fail for only a second, can I tell mdadm to restart the array automatically, so I would not even notice the failure if no application tried to access the file system during it?
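
    On the mdadm questions, a sketch (md0 and sdb1 are assumptions - check /proc/mdstat for the real names). Note the consistency check is only meaningful while the array has redundancy; a 2-of-3 RAID 5 has no parity left to verify, so it applies once the array is made whole again:

        # Kick off a consistency check and read the result:
        echo check > /sys/block/md0/md/sync_action
        cat /sys/block/md0/md/mismatch_cnt

        # After a transient dropout, return the member to the array:
        mdadm /dev/md0 --re-add /dev/sdb1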

  • Managing users and groups on Windows 7 Home Premium

    - by AviD
    I recently upgraded the home PC from XP Pro to Windows 7 Home Premium. I'm looking for a solution for a few things that seem to be missing from this edition... Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or do anything even slightly advanced with users (basically, create/group/picture is it). net localgroup, net users, net, etc. don't seem to work - I'm getting "system error 5". While I'm on the topic, I can't activate (what was once) "Local Security Policy"... I'm looking for any help, advice, or even a new direction, because things are different on Windows 7. To clarify, I'm looking to do some of the following, which were simple back in XP-land:

    - remote user only (i.e. no local logon)
    - grant special privileges for a specific user
    - grant access to e.g. the C$ share for a specific remote user
    - create custom groups for users, to be able to separate the privileges of, say, my wife's account from my kids'
    - define quite specifically what each user can do (beyond just standard users)
    - harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)
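
    One detail worth checking before giving up on the command line: "system error 5" is access denied, which on Windows 7 usually means the prompt isn't elevated rather than the feature being absent. From cmd.exe started with "Run as administrator" (the group and user names below are examples):

        net localgroup Kids /add
        net localgroup Kids daughter /add
        net localgroup Kids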

  • Why is there no /usr/bin/ in Windows? Would it be dangerous to add the entire Program Files to the path?

    - by dotancohen
    I am a Linux user spending some time in Windows, and I'm trying to understand some of the Windows paradigms instead of fighting them. I notice that each program installed in the traditional manner (i.e. via orgasmic installers: Yes, Yes, Yes, Finish) adds its executables to C:/Program Files/foo/bar.exe and then adds a shortcut to the Desktop / Start Menu containing the entire path. However, there is no common directory with links to the software, i.e. no C:/bin/bar.exe which would link to C:/Program Files/foo/bar.exe. Therefore, after installing an application, the only way to use it is via the clicky-clicky menus or by navigating to the executable in the filesystem. One cannot simply press Win-R to open the run dialogue and then type bar or bar.exe, as is possible with notepad or mspaint. I realize that Windows 8 improves on this with the otherwise horrendous Start Screen, which does support typing the name of the app, but again this depends on the app having registered itself for such. Would I be doing any harm by adding C:/Program Files recursively to the Windows path? I do realize that there will be name collisions (i.e. uninstall.exe), but could there be other issues?
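
    For the Win-R case specifically, Windows has a registration mechanism besides PATH: the App Paths registry key, which is exactly how "notepad" and "mspaint" resolve. A hedged example (bar.exe and foo are the hypothetical names from above):

        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths\bar.exe" /ve /t REG_SZ /d "C:\Program Files\foo\bar.exe" /f

    PATH itself is not recursive, so C:\Program Files could only ever be added directory by directory.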

  • Sending emails from an external subnet in VMware ESXi

    - by user80658
    This might be a bit hard for me to explain, and it is a pretty individual situation. I have a dedicated server at Hetzner (www.hetzner.de). The public IP is 88.[...].12. I have ESXi running on this server. I can access the ESXi console via the public IP, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public IP (88.[...].26). This additional public IP belongs to the first virtual machine, a firewall appliance, which is connected to the WAN. It needs to be done this way, since it is the official way at Hetzner. My 46. subnet is behind the firewall. I have a Virtualmin server with a Dovecot IMAP/POP3 server. When I send an email, most providers (Gmail) will accept it, but a lot (AOL) will put it into spam. My theory is: the MX record of my domain of course points to the IP of the virtual machine (46.[...]), but the raw email says it was sent from the IP of the firewall (88.[...].26), which doesn't look trustworthy. A solution would be for the firewall to handle mail itself, but it simply can't. How can I prevent this problem? Thanks.
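
    If the firewall appliance is Linux/iptables-based (an assumption - many appliances are), the usual cause is that it masquerades all outbound traffic; exempting the routed 46.x subnet lets the mail VM's own address appear as the SMTP source. The addresses are placeholders:

        # No NAT for the publicly routed subnet; it can leave as-is:
        iptables -t nat -I POSTROUTING -s 46.0.0.0/29 -j ACCEPT

    A matching PTR record for the 46.x address (set via Hetzner's panel) matters at least as much for spam scoring.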

  • How do you back up your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator, and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading through ADSL at 1 Mbit/s would take 20 hours - too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers: is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to an unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient - but that's not a big deal. Also, when I need to restore a file, finding the correct file could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
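
    Two encfs details that bear on this plan, sketched under the assumption of a recent encfs (1.7+): the key material lives in .encfs6.xml inside the ciphertext directory, so syncing that directory to several machines and using the same password is enough; and reverse mode can expose an encrypted view of the plaintext for the remote side to pull, with no local ciphertext copy. The paths are examples.

        # Normal mode: the same ciphertext dir + password works on any machine:
        encfs ~/.private-crypt ~/private

        # Reverse mode: on-the-fly encrypted view of the plaintext,
        # suitable for rsnapshot/duplicity to back up from:
        encfs --reverse ~/private ~/private-ciphertext-view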

  • Computer frequently fails to boot

    - by tam
    I've run into an issue with my computer: it no longer reaches POST, but simply powers on for a fraction of a second and powers off. But not always - sometimes it boots just normally and works as it should, with no issues suggesting insufficient power or anything like that. But as soon as I turn it off, I cannot turn it back on; then again, at some random point it just powers up and resumes normal operation. If I disconnect the 8-pin ATX connector from the motherboard, it powers up, fans and disks spinning normally, until I power it off again. So the problem only happens when the ATX connector is attached, which seems odd - I always used to see this kind of error when the ATX connector was not connected, but here it's the exact opposite. It also does not emit any sound on the buzzer, except the normal beep when it powers up normally. I have already tried:

    - removing the graphics card
    - removing one and/or all RAM sticks
    - disconnecting everything non-essential, even hard drives
    - clearing CMOS

    I have not yet tried removing all components and booting everything outside of the case, because I did not have the time to disassemble and bleed the water loop. However, I can confirm that nothing is stuck underneath the motherboard, nor are any of the brass risers touching the board where they should not. Specs:

    - Gigabyte GA-970A-UD3
    - AMD FX-6300
    - ATI HD 7850

    I think this should be enough for this issue.

  • certutil -ping fails with a 30-second timeout - what to do?

    - by mark
    Dear ladies and sirs. The certificate store on my Win7 box is constantly hanging. Observe:

        C:\>1.cmd

        C:\>certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface

        C:\>set PROMPT=$P($t)$G

        C:\(13:04:28.57)>certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:04:58.68)>certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:05:28.79)>set PROMPT=$P$G

        C:\>

    Explanations:

    - The first command shows that certutil has -ping and -pingadmin parameters.
    - Trying either ping parameter fails with a 30-second timeout (the current time is visible in the prompt).

    This is a serious problem. It screws all the secure communication in my app. If anyone knows how this can be fixed - please share. Thanks.

    P.S. 1.cmd is simply a batch of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G

  • Can't find the Partition tab in Disk Utility on OS X 10.6.8

    - by John W
    I just got a used MacBook Pro. I created a new admin account and deleted the old one, as well as one other user. This is an older, late-2007 MBP; the OS X upgrade to 10.6.8 was just performed. My Macintosh HD is showing up as Partition 2. I ran Disk Utility (not from the install disk), but there was no Partition tab. I have a 160 GB drive with only 53 GB of space left on it. Since I am the only user and have no files on the laptop yet, I don't understand why there is so little space left; surely the OS can't use up over 100 GB. I wanted to run Disk Utility to see if there were any recovery partitions, or other partitions left over from the previous owner, that could be erased to make room for expanding the main partition. Unfortunately, there is no Partition tab in Disk Utility, even though the documentation I found online states that this version of OS X includes it. The OS X disks I have are for an older version, so I wasn't sure if they would be of any use in solving this problem; I was also afraid that by using them I would lose the little bit of data/apps I have assembled. I would rather not do a fresh install and have to do all the updates again. The previous owner left some apps that I don't want to lose, as I would have to pay handsomely to get them back. Simply put: if the previous user's data is still taking up space on a partition I can't see, I need to locate it, erase it, and expand the primary partition to reacquire the disk space for my files. I am new to Mac, so please be as descriptive as possible. Thanks.
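
    Terminal can show what Disk Utility's missing tab would, which helps confirm whether a leftover partition exists. A sketch (the disk identifiers are examples - take the real ones from the list output):

        # Every disk and partition, with sizes:
        diskutil list

        # Details, including capacity and usage, for a given volume:
        diskutil info disk0s2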

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a simple, small workbook (< 200 rows that don't even reach column AA) that does nothing more than calculate some totals within the same worksheets. No macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings. I see that importing data from CSV/text has created many, many workbook connections over time, but even if I delete them all (there were hundreds) it makes no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, "(Not Responding)" appended to the title bar, and the application locking up. The program seems to "recover" every time, but the efficiency of editing this file is obviously seriously handicapped. All other files seem fine in Excel, and other programs have no apparent performance issues. I see Excel is chewing up CPU, but I'm not sure how to narrow down what process or service is "clashing" with it. I tried the same file on other computers and performance is fine. If I turn off all start-up services and run only Excel, performance is restored... until I start using other programs, and then it bogs down again. At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.

  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, where it was recommended that I move it here.) I've been going over the documentation for Elastic Search; I'm a big fan, and I'd like to use it to handle search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application were on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world), and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is that it doesn't support HTTPS and basic authentication out of the box. If it did, this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. Given that, what is a good way to expose Elastic Search over the Internet in a secure way? Note: I'm looking for something that, hopefully, will not require writing code to provide shims for the methods I want (in other words, writing forwarders).
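
    A common pattern that fits shared *NIX hosting, sketched here with nginx as the assumed front end: bind Elastic Search to 127.0.0.1:9200 and let nginx terminate HTTPS and enforce basic auth - no shim code required. Paths and names are examples.

        # Create credentials (htpasswd ships with apache2-utils/httpd-tools):
        htpasswd -c /etc/nginx/es.htpasswd esuser

        # Minimal nginx server block in front of Elastic Search:
        #
        #   server {
        #       listen 443 ssl;
        #       ssl_certificate     /etc/nginx/es.crt;
        #       ssl_certificate_key /etc/nginx/es.key;
        #       auth_basic           "Elastic Search";
        #       auth_basic_user_file /etc/nginx/es.htpasswd;
        #       location / {
        #           proxy_pass http://127.0.0.1:9200;
        #       }
        #   }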

  • Enable BitLocker and save the key to a share

    - by user273694
    I have searched all over the web but cannot find a complete answer to this: how to enable BitLocker on a laptop with a TPM, and store a file with the BitLocker recovery key and TPM password, BY USING THE manage-bde COMMAND-LINE TOOL. The file should be the same as the one created in the BitLocker manager UI. I DO NOT want to save to AD. The same question was asked here but was not answered correctly. The goal is to write a script to be used with an endpoint manager. I have tried the following:

        manage-bde -on C:

    This works fine, but does not create or save a key.

        manage-bde -on C: -rk C:\myfolder\
        manage-bde -on C: -RecoveryKey C:\myfolder\ -rp

    The output from these last two states that a key has been saved to C:\myfolder and so on, but that is not the case. It also says that I have to:

    1. Save the password in a secure location.
    2. Insert a USB flash drive with an external key file into the computer.
    3. Restart and run the hardware test.
    4. Type "manage-bde -status" to check if the hardware test succeeded.

    After a restart, I get an error saying that BitLocker could not be enabled because the BitLocker startup key or recovery password could not be found on the USB device... C: was not encrypted. Why am I asked to insert a USB drive? I simply want to encrypt the hard drive and save the recovery information to a file automatically. Is that too much to ask? Help, please!
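
    A hedged sketch of the protector-first ordering that usually avoids the USB/hardware-test dance: add the protectors explicitly, dump the recovery password to a file, then enable encryption with the hardware test skipped (the share path is a placeholder):

        manage-bde -protectors -add C: -TPM
        manage-bde -protectors -add C: -RecoveryPassword
        manage-bde -protectors -get C: > \\server\share\%COMPUTERNAME%-bitlocker.txt
        manage-bde -on C: -SkipHardwareTest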

  • Variable Assignment for arguments with a for loop

    - by RainbowDashDC
    Alright, so, I've searched quite a bit on how to do this, but I've given up, as I simply couldn't find anything. I have the code below; its main purpose is to take 9 arguments and assign each to a variable - ignore the echoes and piping. My question is: how can I simplify this with a for loop or such, so it doesn't take as much code, and, if possible, handle more than 9 arguments as well?

        set pkg1=%1
        set pkg2=%2
        set pkg3=%3
        set pkg4=%4
        set pkg5=%5
        set pkg6=%6
        set pkg7=%7
        set pkg8=%8
        set pkg9=%9

        IF DEFINED pkg1 (echo %1.ini 1> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg2 (echo %2.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg3 (echo %3.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg4 (echo %4.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg5 (echo %5.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg6 (echo %6.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg7 (echo %7.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg8 (echo %8.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
        IF DEFINED pkg9 (echo %9.ini 1>> %WINGET_TEMP%\args.rdc 2>nul)
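
    A sketch of the loop form, relying on %* (the full argument list), which also lifts the 9-argument limit without SHIFT:

        @echo off
        rem Recreate args.rdc from scratch, one "<arg>.ini" line per argument.
        del "%WINGET_TEMP%\args.rdc" 2>nul
        for %%A in (%*) do echo %%~A.ini>> "%WINGET_TEMP%\args.rdc"

    %%~A strips surrounding quotes from each argument; like the original, this writes nothing when no arguments are given.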

  • Common folder in Linux

    - by rks171
    I have two users on my Ubuntu machine. I want to share some media files between them, so I created a directory in /home called media, made the group media, and set it up as described in this post:

        sudo groupadd media
        sudo mkdir -p /home/media
        sudo chown -R root.media /home/media
        sudo chmod g+s /home/media

    Then I added my user to the group:

        sudo usermod -a -G media rks171

    Then I also added write permission on this folder for the group:

        sudo chmod -R g+w media

    So now, doing 'ls -lh' gives:

        drwxrwsr-x 2 root media 4.0K Oct  6 09:46 media

    I tried to move pictures into this new directory from my user directory:

        mv /home/rks171/Pictures/* /home/media/

    and I get 'permission denied'. I can't understand what's wrong. If I simply type 'id', it doesn't show that my user, rks171, is part of the media group. But if I type 'id rks171', it does show that my user is part of the media group. Anybody have any ideas why I can't get any files into this common folder?
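
    The giveaway is that plain 'id' doesn't show the group yet: group membership is read at login, so the running session never picked up media. A sketch of the fix, using only standard tools:

        # Either start a shell with the new group active...
        newgrp media

        # ...or log out and back in, then verify and retry:
        id
        mv /home/rks171/Pictures/* /home/media/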

  • How to install Red Hat Enterprise Linux on an Apple MacBook Pro (MacBookPro4,1)

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified that the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line; I also tried removing the ide parameter and using just "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading "ata..." something (it disappears too fast for me to read). Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk, it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions > Boot Camp > Installation and Storage) and didn't get an answer.

  • Why does simply splitting an Ethernet cable not work?

    - by Sin Jeong-hun
    I thought Ethernet was logically a one-line communication bus (for argument's sake, I am excluding hubs). All machines attached to the bus hear the same signals, and the machines themselves try to avoid collisions by randomly backing off (http://computer.howstuffworks.com/ethernet6.htm). If so, why would splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch?

    What the Internet says would not work:

        [4-port home router] ----[one Ethernet cable]----[simple splitter]====[two computers]

    What the Internet says I should do:

        [4-port home router] ----[one Ethernet cable]----[switch]====[two computers]

    Is this because of signal degradation (reduced electric current)? Thank you for all the answers! The reason I did not just use two ports of my home router is this: the 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since a wired network is far more reliable and secure, I bought a long Ethernet cable and connected the computer to the router. Now I am thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share it between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and switches work; I just have some bits of knowledge I heard in my college class.

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? If it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and MATLAB files.

    Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions. He simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say, I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the kind of files I was dealing with (very many, some very large). So thanks to all who provided suggestions, and I wish you all the best!
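
    For anyone who lands here actually needing ext3 undeletion, a sketch of the usual journal-based route (the device and paths are examples; always work on an image, never the live partition):

        # Image the filesystem first:
        dd if=/dev/sdb1 of=/mnt/spare/disk.img bs=4M

        # Recover what the journal still references:
        extundelete /mnt/spare/disk.img --restore-all

    ext3grep is an alternative tool with a similar approach.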
