Search Results

Search found 30742 results on 1230 pages for 'folder size'.

  • Second core of dual-core CPU missing

    - by Zardoz
    My Lenovo T61 has a dual-core CPU. I just noticed that under Ubuntu 10.10 only one CPU is recognized. I know that both CPUs worked at one point; I'm not sure since when the second CPU has been missing, maybe since the last kernel update. Currently I am using linux-image-2.6.35-23-generic (for x86_64). What can I do to enable the second CPU again? Here is the output of /proc/cpuinfo:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
        stepping        : 6
        cpu MHz         : 800.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority
        bogomips        : 4189.99
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Any help is welcome. I really need that CPU power for my work here.
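
    The cpuinfo above ("siblings : 1", "cpu cores : 1") shows the kernel itself only brought up one core, which usually points at the boot configuration rather than the hardware. Below is a minimal diagnostic sketch, assuming a 2.6.35-era kernel with CPU hotplug exposed via sysfs; the grep patterns are illustrative, not exhaustive:

        #!/bin/bash
        # Was the kernel told to limit SMP? (e.g. a leftover maxcpus=1 or nosmp)
        grep -E 'maxcpus|nosmp' /proc/cmdline || echo "no SMP-limiting boot options"

        # Which CPUs does the kernel know about, and which are online?
        cat /sys/devices/system/cpu/online
        cat /sys/devices/system/cpu/offline 2>/dev/null

        # Boot-time SMP/ACPI messages often explain a disabled core
        dmesg | grep -iE 'smp|cpu1' | head

        # If cpu1 exists but is offline, try bringing it back up (as root)
        if [ -f /sys/devices/system/cpu/cpu1/online ]; then
            echo 1 > /sys/devices/system/cpu/cpu1/online
        fi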

  • allow spoofing when using tun

    - by Johnny
    I have a working OpenVPN setup with a server and a number of clients. How would I go about allowing IP spoofing through the OpenVPN server (to demonstrate security concepts)? A normal ping from client to server goes through all right:

        root@client: hping3 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes
        len=40 ip=10.8.0.1 ttl=64 DF id=0 sport=0 flags=RA seq=0 win=0 rtt=124.7 ms

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes
        10:17:51.734167 IP 10.8.0.6.2146 > 10.8.0.1.0: Flags [], win 512, length 0

    But when spoofing a packet, it does not arrive at the OpenVPN server:

        root@client: hping3 -a 10.0.8.120 10.8.0.1
        HPING 10.8.0.1 (tun0 10.8.0.1): NO FLAGS are set, 40 headers + 0 data bytes

        root@server:/etc/openvpn# tcpdump -n -i tun0
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on tun0, link-type RAW (Raw IP), capture size 65535 bytes

    My current config files:

    server.conf

        local X.Y.Z.P
        port 80
        proto tcp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1 bypass-dhcp"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        persist-local-ip
        status openvpn-status.log
        verb 3

    client.conf

        client
        dev tun
        proto tcp
        remote MYHOST..amazonaws.com 80
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3
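
    In tun mode the OpenVPN server keeps an internal routing table keyed by each client's assigned VPN address and silently drops packets whose source address it has not learned, which matches the tcpdump silence above. A sketch of the usual workaround, untested here: declare the spoofed source range as belonging to that client with iroute. "client1" is a placeholder for the client certificate's CN.

        # Tell the server this client legitimately "owns" 10.0.8.0/24, so
        # packets sourced from that range are accepted and routed.
        mkdir -p /etc/openvpn/ccd
        cat > /etc/openvpn/ccd/client1 <<'EOF'
        iroute 10.0.8.0 255.255.255.0
        EOF

        # And add to server.conf:
        #   client-config-dir ccd
        #   route 10.0.8.0 255.255.255.0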

  • How can I create a separate toolbar from the Task Bar?

    - by Iszi
    In Windows XP, you could separate toolbars from the Task Bar by dragging them to the desktop. They could then be left lying about anywhere on your screen or, my preferred option, docked to any side of the screen.

    I found this particularly useful to keep a handy list of common phone numbers quickly accessible. I'd create a new toolbar pointing to a custom folder, and put a bunch of dead shortcuts in the folder that had names and numbers as their file names. I'd then dock the toolbar to the left side and set it to auto-hide and always on top (options which could be set separately from the Task Bar as well), and it would be readily available no matter what else I was doing on my system.

    However, on my Windows 7 system, I seem unable to perform the crucial step of pulling the new toolbar off of the Task Bar. This is of course with the Task Bar "unlocked" so that I can move all my toolbars around. Is there something I'm missing here, or is this a feature that's been disabled in Windows 7? Is there any way to re-enable it, or otherwise achieve similar functionality? I'd rather be able to do this without additional software, if possible.

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block on the actual client IP address at the connection level (it would be the IP address of the CDN, not the actual client). Instead, we have $http_x_forwarded_for, which contains the IP we need to block for a given request. For the same reason we cannot use iptables: blocking the connection IP of the proxied client would have no effect. We need nginx to block the request based on the value of $http_x_forwarded_for.

    Initially, we tried multiple simple if statements: http://pastie.org/5110910. However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched all the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923.

    Keep in mind that we're trying to block many more than 3 or 4 IP addresses; more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also in whether there are better ways to achieve our goal.
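
    For a list this size, the usual alternative is nginx's geo module, which compiles all the addresses into a single lookup table instead of evaluating a regex per if block. A sketch, untested against this setup; note that geo has to be pointed at $http_x_forwarded_for explicitly, and it parses the value as a single address, so clients that send a comma-separated XFF list would not match:

        # /etc/nginx/conf.d/blocklist.conf  (addresses are examples only)
        cat > /etc/nginx/conf.d/blocklist.conf <<'EOF'
        geo $http_x_forwarded_for $blocked {
            default          0;
            203.0.113.7      1;
            198.51.100.0/24  1;
        }
        EOF

        # Then a single check inside each server block:
        #   if ($blocked) { return 403; }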

  • How can I tell why I have access to a file share on Windows Server

    - by Joel
    I have a file share on a Windows 2008 R2 server in an AD domain (call it \\SECURESERVER\STUFF), and I am not sure if I have the share and folder permissions set up right. I noticed the problem when I set up a new server (WORKGROUP\FOREIGNSERVER) that was not joined to the domain and tried to copy some files off of \\SECURESERVER\STUFF. I was surprised to find that when I tried to access the files, it did not prompt me for a username and password and proceeded to give me full access to the files. That worried me, so I tried the same thing on some workstations that were not in the domain, and they did NOT have the same behavior (they did prompt for a username/password, as desired/expected).

    So, I think there is something peculiar about FOREIGNSERVER. I am logging into it with a local admin account, but my domain and SECURESERVER should know nothing of this server. I've carefully gone through the share and folder permissions on the share, but I can't find the reason that FOREIGNSERVER has access. How can I find out why FOREIGNSERVER has access to SECURESERVER?

  • How is the Linux console displayed to the user, and how does the user go about changing the console settings?

    - by Chris
    I've been searching for the last two days trying to understand how the console displays itself to the user and how to change the console settings. I've had some luck along the way, but nothing I've found has given me a really clear explanation of how the console is displayed or how to change or control its display settings. Some examples of what I'm looking for:

    1) How is the console displayed on the screen? I know that X11 uses your graphics card driver to display graphics to the screen, but how is the console's text mode handled? Could someone either explain this to me or point me to an in-depth overview of it all?

    2) Is it possible to have multi-head support in console mode, with separate ttys on each screen? If so, how would I go about setting this up?

    3) How would you go about changing the size of the console display from the default 80x25 to a custom size?

    I'm testing anything I find on a Debian testing build, which is just the minimal base install in a VirtualBox VM. In time I will be using this information to set up my main system, which is multi-head with 3 monitors. I would like to be able to support all three displays in console mode if possible.
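
    On current kernels the text console is drawn by the framebuffer console (fbcon) on top of a framebuffer device (vesafb or a KMS driver), so the console geometry follows the framebuffer mode. A sketch of how to inspect and change it on a Debian/GRUB 2 system; treat the exact modes as assumptions, since the available ones depend on your graphics driver:

        # What is driving the console right now?
        dmesg | grep -iE 'fbcon|vesafb|framebuffer' | head

        # Show and change the current framebuffer geometry (package: fbset)
        fbset -i
        fbset -xres 1024 -yres 768

        # Make GRUB set a mode and hand it to the kernel console permanently:
        #   in /etc/default/grub:
        #     GRUB_GFXMODE=1024x768
        #     GRUB_GFXPAYLOAD_LINUX=keep
        update-grub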

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed BackTrack 5 R3 KDE, and I realized that my wireless is not working, although wired is working fine. Here's the lshw output:

        *-network
            description: Ethernet interface
            product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:02:00.0
            logical name: eth0
            version: 05
            serial: 04:7d:7b:b7:46:f8
            size: 100MB/s
            capacity: 100MB/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
            resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
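
    Note that the lspci listing shows no wireless controller at all, so the WLAN adapter is either attached over USB or switched off entirely (rfkill or BIOS). A quick diagnostic sketch:

        lsusb                    # many laptop WLAN adapters attach via USB
        rfkill list              # look for "Hard blocked" / "Soft blocked"
        rfkill unblock all       # clear any soft block
        dmesg | grep -iE 'wlan|firmware|80211' | head   # missing-firmware hints
        iwconfig                 # does any wireless interface exist at all?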

  • Remote Desktop Services Licensing - Does server have to have a RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS' website that in order to set up multiple users, we needed to install the RDS role, so I did. But now the application we are trying to share is running much slower than it was before: prior to the role installation, it took about 5 seconds to open; now it takes a few minutes to open, without any other users logged on except me.

    My assumption is that the RDS role may be too much for this micro instance to handle, and currently, changing to another size instance is not an option (it may become possible later if we receive enough funding). This leads me to the following questions:

    1) Is it a sensible assessment to assume that it is the RDS role that is slowing things down, or are there other things that I could look at to speed it up? We are talking about a machine with ~600MB of memory.

    2) If I revert back to the pre-RDS role, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable, but someone else may have some other experience. I am also making it clear that we want to do this in a legitimate way.

    Thanks in advance for any assistance that can be provided!

    EDIT: if it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one. I want to know if it is possible to install valid licenses without the RDS role.

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

        Section                        No. Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as the result is usable by our low-tech users.

    Related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, the approach required too much maintenance for our Wordophobes.

  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image out of the program is by screenshot. My goal is to save the full-size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:

        #!/bin/bash
        window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
        wmctrl -v -i -r $window -e '0,0,0,6030,5828'
        wmctrl -i -a $window
        import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions, and uses ImageMagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work.

    I'm using Ubuntu 10.04 and the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
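
    Window managers generally clamp resize requests to the visible screen, which is why the wmctrl -e step fails. A common workaround, sketched below under the assumption that the program can run on any X display, is to start it on a virtual X server (Xvfb) that is larger than any physical monitor; "program" and the window title are placeholders:

        # Start a virtual display larger than the target window size
        Xvfb :99 -screen 0 6100x5900x24 &
        export DISPLAY=:99
        metacity &     # a window manager is needed for wmctrl to work

        program &      # launch the app on the virtual display
        sleep 5        # give it time to map and draw its window

        window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
        wmctrl -i -r $window -e '0,0,0,6030,5828'
        import -window $window ~/Desktop/screenshot.png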

  • User profile service fails

    - by s.r.a
    I have Windows 7 and 3 drives on my HDD. The second drive is D:\, and there are some files on it. I decided to install 8.1 Enterprise, so I installed it in a dual-boot manner beside 7, on the D:\ drive, which as I said was not empty; when installing 8.1, I didn't format D:. I installed 8.1 successfully on D:\ and it was working fine.

    One time when I had booted into 7, I thought I should arrange the 8.1 folders on D: to be separated from the other, non-8.1 folders, so I created a new folder, named it "Windows 8.1", and cut all the 8.1 folders and pasted them into that new folder. Now my D: drive was arranged. When I restarted the PC and selected 8.1 to start with, it didn't come up like before; instead, it now shows a blue screen (not the blue screen of death!) with the time in the lower-left corner. When I click the screen, this message appears:

        The User Profile Service service failed the sign-in. User profile cannot be loaded.

    I know two things: 1) the problem has to do with that cutting and pasting of the 8.1 folders to arrange them, and 2) if I reinstall 8.1, the problem will be solved (as long as I don't do that cutting and pasting again!). Is there any simpler way to solve the issue and keep the two OSs alongside each other?

  • Uninstall GlassFish and Metro completely

    - by user775829
    I thought of updating my GlassFish server from 2.1 to 3.1.1 on a Linux machine. I downloaded the .ZIP package. However, during the uninstall of GlassFish v2.1 I did not find the uninstall.sh file in the "bin" directory. Here are a few steps I took:

    1) I removed the GlassFish folder (rm -rf ...). At the end it notified me that it could not remove 2 files used by Metro. I can't recollect those file names, but I manually deleted that folder.

    2) I made a mistake by not uninstalling Metro first. I uninstalled Metro completely after that, but it seemed pointless (it uninstalled successfully :P).

    3) I transferred the GlassFish 3.1.1 ZIP file, unzipped it, and configured it.

    Following are a few problems I am facing:

    1) I cannot deploy any of my WAR files. It gives errors saying "Error creating bean, Instantiation of bean failed", etc. (However, the same WAR file deploys successfully on another Linux machine.)

    2) When I try installing Metro v2.1 separately, it does not show the admin console, or it times out while starting the domain. The log file of the domain says it has started the domain successfully, and the process is also created. But after running the command (asadmin) it takes what seems like forever and times out without showing "Domain Started Successfully".

    3) There is no uninstall.sh in the GlassFish v3.1.1 bin directory.

    How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? What are the files which I will have to manually remove?
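
    Since the ZIP distribution of GlassFish 3.x ships without an uninstaller, removal amounts to deleting its directory tree plus any per-user state. A cleanup sketch; the paths are typical defaults and assumptions, not necessarily what this install used:

        # Stop anything still running first
        asadmin stop-domain domain1 2>/dev/null

        # The unpacked ZIP tree is the whole installation
        rm -rf /opt/glassfish3

        # Per-user state left behind by asadmin
        rm -rf ~/.gfclient ~/.asadminpass ~/.asadmintruststore

        # Hunt for leftover Metro jars (Metro's runtime jar is webservices-rt)
        find / \( -name 'metro*.jar' -o -name 'webservices-rt*.jar' \) 2>/dev/null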

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange from our source SBS 2003 server, I'm getting these warnings:

        Event 5020: "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]"
        Event 5006: "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]" (for the Public folder, First storage group and Recovery storage group)

    I followed the TechNet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions.

    If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails give an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so there isn't really anything lost. Can I just remove these folders and let them be created anew?

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails version 3.2.15 right now). As it has been getting busier, the log files it's producing are becoming less and less useful for troubleshooting. When requests come in like this, it's not a problem:

        Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000
        Processing by KittenController#rendered_png as HTML
          Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"}
          Rendered text template (0.0ms)
        Sent data  (1.8ms)
        Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms)

    Short request, quickly assembled, all the relevant log lines are in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log lines belong to which GET:

        Sent data  (4.1ms)
        Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms)
        Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

    Ooookaaaay... which goes to what? Is it possible to add something like a transaction ID to these log lines? The log spam would be interspersed, but at least grep-magic would give me the unified entries that I need.
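
    Rails 3.2 already ships with tagged logging: setting config.log_tags prefixes every log line of a request with chosen tags, such as the per-request UUID, which is exactly the grep handle described above. A sketch of the change, assuming a standard app layout (the heredoc is only to keep the example self-contained; the Ruby goes in config/application.rb):

        cat >> config/application.rb <<'EOF'

        # Tag every log line with the request's UUID (and remote IP)
        Rails.application.configure do
          config.log_tags = [:uuid, :remote_ip]
        end
        EOF

        # Afterwards each line carries its request's tags, e.g.:
        #   [a3f8c2d0...] [172.16.202.30] Completed 200 OK in 767.4ms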

  • Map Linux drives to Windows 7 for media streaming over the internet

    - by Ortix92
    I'm trying to map a Linux network drive to my Windows 7 laptop; however, this laptop is not on the LAN. At home, I simply use Samba, but this obviously won't work over the internet. I'm trying to avoid VPN, so if there are other solutions, I would like to know about them. The reason I ask is because my university does this as well: we can simply map folders to our computers without VPN connections. I'm not sure what they are running as servers.

    The main reason is that I want to be able to access my files stored on my home server wherever I go. They are located in the /home/ folder (videos, music and pictures folders). I'm trying to keep my websites and media separate from each other. I wouldn't mind accessing them from a web interface either, but I would like to keep the directory structure intact. I remember an app like that which came with Winamp; I ran it on my Windows PC (as the server), but unfortunately it doesn't exist for Linux. Any ideas on what I could use? Would XBMC be able to help me out with this? I did do some research, but I couldn't find any concrete answers.
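
    Universities that offer drive mapping without a VPN commonly expose folders over WebDAV (HTTPS), which Windows 7 can map natively via "Map network drive" with an https:// address. A server-side sketch using Apache, untested and with placeholder paths and users; the media folder must be readable by www-data, and the HTTPS vhost setup is omitted:

        apt-get install apache2
        a2enmod dav dav_fs ssl
        cat > /etc/apache2/conf.d/webdav.conf <<'EOF'
        Alias /media /home/media
        <Location /media>
            DAV On
            AuthType Basic
            AuthName "media"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>
        EOF
        htpasswd -c /etc/apache2/webdav.passwd myuser
        service apache2 restart

        # On the Windows 7 side: Computer -> Map network drive ->
        #   https://yourhost/media  (enter the htpasswd credentials)
        # Windows refuses Basic auth over plain HTTP by default, hence SSL.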

  • What causes high CPU usage on the server during file upload

    - by bosiang
    When I try to upload a huge file (approx 2GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction.

    Here is the result of the "top" command during uploading of 4 files (18MB, 38MB, 60MB, 33MB):

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1904 apache    20   0 33504 5740 1952 R 28.3  0.2   0:02.19 httpd
         1905 apache    20   0 33504 5740 1952 R 28.3  0.2   0:01.99 httpd
         1903 apache    20   0 33232 6968 3060 R 28.0  0.2   0:01.98 httpd
         1910 apache    20   0 33240 6020 2248 S 11.5  0.2   0:02.85 httpd
         2133 root      20   0  2656 1124  896 R  1.6  0.0   0:00.71 top
            1 root      20   0  2864 1404 1188 S  0.0  0.0   0:03.99 init

    Here is the code for chunking, although even when I don't use this code (just a simple file upload), it still causes that high CPU usage:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100MB chunk size
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                var totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) {
                        // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) {
                        // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has a nearly 4-million-row table, and runs on a 4GB dual-Xeon 5320 server. When I check server stats on this server with ps -aux, I get returns of MySQL running at about 11% capacity, so no serious load. The main query against MySQL runs in about 0.45 seconds.

    I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough.

    I've looked at the MySQL variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal?

    Here's the output of show variables (though I'm not sure it is important):

        +---------------------------+----------------------------+
        | Variable_name             | Value                      |
        +---------------------------+----------------------------+
        | auto_increment_increment  | 1                          |
        | auto_increment_offset     | 1                          |
        | automatic_sp_privileges   | ON                         |
        | back_log                  | 50                         |
        | basedir                   | /usr/                      |
        | bdb_cache_size            | 8384512                    |
        | bdb_home                  | /var/lib/mysql/            |
        | bdb_log_buffer_size       | 262144                     |
        | bdb_logdir                |                            |
        | bdb_max_lock              | 10000                      |
        | bdb_shared_data           | OFF                        |
        | bdb_tmpdir                | /tmp/                      |
        | binlog_cache_size         | 32768                      |
        | bulk_insert_buffer_size   | 8388608                    |
        | character_set_client      | latin1                     |
        | character_set_connection  | latin1                     |
        | character_set_database    | latin1                     |
        | character_set_filesystem  | binary                     |
        | character_set_results     | latin1                     |
        | character_set_server      | latin1                     |
        | character_set_system      | utf8                       |
        | character_sets_dir        | /usr/share/mysql/charsets/ |
        | collation_connection      | latin1_swedish_ci          |
        | collation_database        | latin1_swedish_ci          |
        | collation_server          | latin1_swedish_ci          |
        | completion_type           | 0                          |
        | concurrent_insert         | 1                          |
        | connect_timeout           | 10                         |
        | datadir                   | /var/lib/mysql/            |
        | date_format               | %Y-%m-%d                   |
        | datetime_format           | %Y-%m-%d %H:%i:%s          |
        | default_week_format       | 0                          |
        | delay_key_write           | ON                         |
        | delayed_insert_limit      | 100                        |
        | delayed_insert_timeout    | 300                        |
        | delayed_queue_size        | 1000                       |
        | div_precision_increment   | 4                          |
        | keep_files_on_create      | OFF                        |
        | engine_condition_pushdown | OFF                        |
        | expire_logs_days          | 0                          |
        | flush                     | OFF                        |
        | flush_time                | 0                          |
        | ft_boolean_syntax         | + -                        |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question (it is cut off above). Hopefully it isn't needed anyway.
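
    One rough way to answer the RAM question is to measure how big the data and indexes actually are: if the indexes (MyISAM) or the hot data plus indexes (InnoDB) fit in the relevant buffer, even a small box can stay fast. A sketch using only information_schema, which any MySQL 5.0+ server has:

        mysql -u root -p -e "
            SELECT table_schema,
                   ROUND(SUM(data_length)/1024/1024)  AS data_mb,
                   ROUND(SUM(index_length)/1024/1024) AS index_mb
            FROM information_schema.TABLES
            GROUP BY table_schema;"

        # Rule of thumb: key_buffer_size (MyISAM) should cover index_mb of the
        # busy tables; innodb_buffer_pool_size should cover data_mb + index_mb.
        # A 360MB VPS leaves little room for either once the OS takes its share.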

  • Files showing in smbclient but not smbmount

    - by Staale
    I have a Samba folder that I try to access through smbclient, and I can browse it just fine. However, mounting it through smbmount, all the folders under the share are empty: I can list the folders directly under the share fine, but they all appear empty.

    smbclient:

        # smbclient //server/share -U username -W workgroup password

    smbmount:

        # sudo smbmount //server/share mntpoint -o user=username,workgroup=workgroup,password=password

    I have also tried with domain=workgroup instead of workgroup=workgroup; both give the same result. No error messages, everything mounts fine, but all the folders under mntpoint are empty, despite the same folders being non-empty when using smbclient. Are these using different libraries? How can I debug the error?

    Additionally, if I try to mount //server/share/folder, doing an ls results in a segmentation fault. Using dmesg I find:

        kernel BUG at /build/buildd/linux-2.6.28/fs/cifs/cifs_dfs_ref.c:315!

    Full trace: http://pastebin.com/m70adc213

    Using a credentials file, I first get empty dirs, then "Resource temporarily unavailable". In my dmesg I see the following output:

        CIFS VFS: compose_mount_options: Failed to resolve server part of \\srv\share to IP: -11
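
    The dmesg lines point at the kernel CIFS client's DFS-referral handling: cifs_dfs_ref.c is the DFS code, and -11 is EAGAIN from name resolution. smbclient resolves names in userspace (NetBIOS/DNS), while the in-kernel client depends on DNS via its resolver upcall, so a share containing DFS links can look empty through a mount while browsing fine in smbclient. A diagnostic sketch with placeholder names and addresses:

        # Does the share return DFS referrals? Look for msdfs mentions:
        smbclient //server/share -U username -W workgroup -d 3 -c 'ls' 2>&1 | grep -i dfs

        # Make sure the referred-to server name resolves on the client;
        # a crude workaround is pinning it in /etc/hosts (placeholder IP):
        echo "192.0.2.10  srv srv.example.local" >> /etc/hosts

        # Retry with mount.cifs and watch the kernel log:
        mount -t cifs //server/share /mnt/share \
            -o user=username,pass=password,domain=workgroup
        dmesg | tail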

  • Last step in HDD recovery (fixing Windows)

    - by Atom Computing
    My dad's hard drive got corrupted as a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it totally (recreating the MBR and MFT) and run a series of chkdsks on it. I can now see all the files and folders on it, and it is all intact. I currently have it as a slave in my computer (where I was doing all the repairs).

    When putting it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to Restart". I don't know why this is happening, but I think it might have something to do with file permissions. I have tried a start-up recovery with the Vista boot CD and it found no problems. When trying to apply file permissions (and creating file permissions for the SYSTEM group, as it didn't have any for the SYSTEM group), it couldn't apply them for some of the files in the System32 folder. I have tried applying them as admin and with as powerful privileges as I can get. All to no avail.

    When the drive is in my PC, I can boot it up (I added it to my bootloader) and it boots fine, except that when it logs in it comes up with this error:

        Rundll32.exe - Windows cannot access the specified device, path or file. You may not have the appropriate permissions to access the item.

    This message keeps coming back, and nothing loads at all. Any help would be greatly received, as I have got so far with the data recovery and want to avoid a reformat at all costs due to the vast number of programs installed, and I don't have much time on my hands! Thanks.

  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement some data logging to CSV files using a Data Collector Set in PerfMon (on a Windows Server 2008 R2 system). The issue we are running into is that we (seemingly) can't control the permissions being set on the log files created by PerfMon.

    What we want is for the log files created by PerfMon to have Everyone:F permissions (Full Control for Everyone). So, we have a directory structure set up where all logs go into a folder c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2). In this example, c:\vms\PerfMonLogs\EvaluationG2 has permissions Everyone:F (below is the icacls output for this directory):

        EVALUATIONG2/ Everyone:(OI)(CI)(F)
                      NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    When the data collector set runs, it creates new subfolders and files within c:\vms\PerfMonLogs\EvaluationG2, e.g. C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3. Each of these directories and files has the following permissions:

        M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    So these new folders are not simply inheriting permissions from the parent folder (I don't know why). Now, we tried adding Everyone:F using the Security tab on the collector set (no dice). Any ideas? How do we control the permissions on the log files generated by a PerfMon data collector set?

  • Windows preventing running of Telnet client

    - by palswim
    At first, I had issues because Windows 7 doesn't install the Telnet client by default (also, SuperUser has a thread on this). So, after installing it (and restarting, like Windows asked, though that's completely unnecessary), I opened a command prompt and went to run my new Telnet program. I enter telnet, and receive:

        C:\Users\[USER]>telnet
        'telnet' is not recognized as an internal or external command,
        operable program or batch file.

    "That's odd," I think to myself. So, in Windows Explorer, I navigate to \Windows\System32 and see telnet.exe sitting in that folder. If I double-click on the executable file, the Telnet command prompt opens for me without a problem. So, I return to my Windows Command Prompt, and enter:

        C:\Users\[USER]>\Windows\System32\telnet.exe
        '\Windows\System32\telnet.exe' is not recognized as an internal or external command,
        operable program or batch file.

    And then (grep comes from Cygwin):

        C:\Users\ryan\Desktop>dir \Windows\System32 | grep telnet

    Nothing. I've disabled UAC and have no idea why my Command Prompt is lying to me. Anyone experience something similar? To recap: in Windows 7, I have installed Telnet and can see it in my System32 folder, but cannot run it via a Command Prompt.

  • Input/output error reading USB backup drive on CentOS 6.4

    - by Kev
    I'm suddenly seeing some strange behaviour on our USB backup drive that doesn't make sense to me:

        (2013-10-21 14:58:23 [root@newdc /]$ cd /mnt/backup/
        (2013-10-21 14:59:03 [root@newdc backup]$ ls -la
        ls: reading directory .: Input/output error
        total 0
        (2013-10-21 14:59:05 [root@newdc backup]$ df -h /mnt/backup
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             917G  843G   28G  97% /mnt/backup

    How is it possible for the OS to know how much is in use, when I can't ls any of it as root? Or more to the point, what problem does this indicate?

    /var/log/messages said this:

        Oct 21 14:57:54 g5 kernel: EXT4-fs error (device sda1): ext4_journal_start_sb: Detected aborted journal
        Oct 21 14:57:54 g5 kernel: EXT4-fs (sda1): Remounting filesystem read-only

    But... read-only is something different than "throw an I/O error"...

    After unmounting to try fsck on it, I had someone on-site look at it, and the drive was not spun up and had a slow-flashing light, which I believe means it was in a power-suspend mode. So I had them unplug and replug the USB cable, and now (before remounting) it says:

        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        /dev/sda1: clean, 2805106/61046784 files, 181934167/244182016 blocks

    I then mount it, and now ls works and df reports:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             917G  680G  191G  79% /mnt/backup

    What would cause it to go into such a state without being asked to? Why all the weird behaviour, and why does it now appear not to be corrupt?
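
    The slow-flashing light plus the aborted journal is consistent with the enclosure dropping into USB power-suspend while mounted: ext4 aborts the journal when the device goes away and returns I/O errors for reads, while df still answers from the in-memory superblock counters. A sketch for preventing a repeat, assuming the suspend theory is right; the vendor/product IDs are placeholders to fill in from lsusb:

        # Check and disable USB autosuspend globally (-1 = never suspend)
        cat /sys/module/usbcore/parameters/autosuspend
        echo -1 > /sys/module/usbcore/parameters/autosuspend

        # Or per-device via udev, so it survives reboots:
        cat > /etc/udev/rules.d/50-usb-backup-nosuspend.rules <<'EOF'
        ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="1234", ATTR{idProduct}=="5678", TEST=="power/control", ATTR{power/control}="on"
        EOF
        udevadm control --reload-rules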

  • Distributing Microsoft Office Template or Macro over the network

    - by zfranciscus
    We have around 400 users who use Word, and we want to make their lives easier by distributing templates and macros over the network. The easiest way to do this, of course, is to set up a shared network folder and let them get the appropriate templates and macros there. But each user has to know where to copy these files to on their local PC, and we have to rely on constant email communication to let them know about newer versions of the macros and templates.

    The next alternative is to ask them to configure Word to point to the network folder, but then any disruption to the network means disruption to their work.

    We are thinking of setting up a synchronization mechanism that downloads new templates to their local machines. We are also thinking of making this sync tool prompt users that it will download new templates, just to give them visibility that they are receiving changes. We are wondering what approach people usually use in their workplaces. Are there any specific tools that can make this task easier?

  • Boot stuck at blinking cursor before GRUB - only works via BIOS boot menu

    - by delta1
    I have a new box running Debian Squeeze. GRUB is installed on /dev/sda, but when booting up I just get a blinking cursor, before the GRUB menu. I can only boot to GRUB successfully when I choose boot options (during POST) and select that specific drive! I have made sure the correct drive is set to boot first in the BIOS. So GRUB works, but the system won't boot to that drive automatically? Any ideas on what could cause this?

    Drives sda/b/c are all 2TB (sda runs the system, with b/c as RAID device md0) with the following partitions:

        $ cat /proc/partitions
        major minor  #blocks  name

           8        0 1953514584 sda
           8        1        977 sda1
           8        2    9765625 sda2
           8        3    6445313 sda3
           8        4 1937302627 sda4
           8       32 1953514584 sdc
           8       16 1953514584 sdb
           9        0 1953513424 md0

    but # fdisk -l /dev/sda gives:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      243202  1953514583+  ee  GPT

    Any insight into this strange behaviour would be greatly appreciated.
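
    One known cause of exactly this symptom on BIOS-plus-GPT setups is firmware that refuses to auto-boot a disk whose protective MBR entry (the type ee partition fdisk shows) lacks the bootable flag; picking the drive from the one-off boot menu bypasses that check. A sketch of the usual checks and fix; the parted subcommand shown is only available in newer parted releases, so treat it as an assumption:

        # GRUB on GPT needs a small bios_grub partition to embed into;
        # sda1 above is ~1MB, which looks like one. Verify:
        parted /dev/sda print

        # Set the bootable flag on the protective MBR. Newer parted:
        parted /dev/sda disk_set pmbr_boot on
        # Older systems: fdisk can toggle it despite its GPT warning:
        #   fdisk /dev/sda   ->  a, 1, w

        # Reinstall GRUB afterwards for good measure
        grub-install /dev/sda && update-grub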

  • Drive security settings in Windows 8 Pro

    - by Donotalo
    My PC OS is Windows 8 Pro x64. Windows 8 seems confusing. The D:\ drive is supposed to be used solely by a single user, who is in the Users group of the PC. The requirements are:

    1) That user will have full control of the D drive.
    2) Admins will have full control of the D drive.
    3) All other users can only list drive contents; no file can be opened.

    My account is an admin account. From the D drive's Security tab, I've set the following:

    1) Allow "List folder contents" for the Authenticated Users group.
    2) Allow "Full control" for SYSTEM.
    3) Allow "Full control" for the specific user who's supposed to use the drive.
    4) Allow "Full control" for the Administrators group of the computer.
    5) Allow "List folder contents" for the Users group.

    After setting this up, the specific user has full control of the D drive, and no other user can open any file on D. But though my account is an admin account, no file on the D drive can be opened from my account! Why is this happening, and how can files be opened from my account?

    Note: all accounts on this PC are local accounts.
