Search Results

Search found 13180 results on 528 pages for 'non interactive'.

  • Unable to mount root fs over NFS

    - by johnmadrak
    I am attempting to set up a Raspberry Pi running Pidora to boot from an NFS share. My configuration in cmdline.txt is:

        dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs nfsroot=<serverip>:/fake/path,nfsvers=3,rw,nolock nfsrootdebug ip=dhcp elevator=deadline rootwait

    On the Pi, the output I see is:

        IP-Config: Got DHCP answer from <router>, my address is <clientip>
        IP-Config: Complete:
            device=eth0, hwaddr=<macaddress>, ipaddr=<clientip>, mask=255.255.255.0, gw=<routerip>
            host=<clientip>, domain=, nis-domain=(none)
            bootserver=<routerip>, rootserver=<serverip>, rootpath=
            nameserver0=<routerip>
        (It pauses for a bit here)
        VFS: Unable to mount root fs via NFS, trying floppy.
        VFS: Cannot open root device "nfs" or unknown-block(2,0); error -6
        Please append a correct "root=" boot option; here are the available partitions:
        .....

    On the NFS server (an OpenVZ container), the output I see in /var/log/messages is:

        Aug 22 23:24:01 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:783 for /fake/path (/fake/path)
        Aug 22 23:24:38 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:741 for /fake/path (/fake/path)
        Aug 22 23:25:25 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:752 for /fake/path (/fake/path)
        Aug 22 23:26:12 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:876 for /fake/path (/fake/path)

    To test, I've made sure I can mount the share (non-root) from both the Pi and another machine, and it worked. Does anyone have an idea of what could be wrong, or how to narrow it down? Thank you in advance for your help.
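
    One place worth checking is how the share is exported: the mount requests in the log arrive and authenticate, so a hedged first step is to make the export as permissive as an nfsroot client needs. A sketch below (the path is the question's placeholder and the subnet is illustrative); note also that kernel-mode nfsd inside an OpenVZ container is often unavailable or restricted, which by itself can break nfsroot even when ordinary mounts succeed:

        # /etc/exports on the NFS server
        /fake/path 192.168.1.0/24(rw,no_root_squash,insecure,no_subtree_check)

        # re-export and confirm NFSv3 and mountd are actually registered
        exportfs -ra
        rpcinfo -p | grep -E 'nfs|mountd'
        showmount -e localhost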

  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: a hosted machine (typically a VPS) serving wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc. privately to < 20 people. People not on the team are not allowed access. Seeking VPN-restricted access to said server. Have good user experience with OpenVPN-based servers/clients, but have yet to server-admin such systems. Otherwise, experienced Linux sysadmin.

    Target system: Ubuntu, probably 12.04. Seeking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (Can easily acquire additional IP addresses as needed for this setup.)

    Option: if absolutely needed, can employ an additional, dedicated "VPN server" VPS simply to be my VPN front end. But I prefer to have all server processes (VPN server plus other server apps) running on the same machine, if possible. Will consider further if a dedicated-VPN-machine setup (1) is easier to install/administer, (2) gives a better/easier end-user experience, and/or (3) makes the system significantly more secure.

    Is any of the above feasible? The main intention: create a VPN from purely-hosted resources, and not spend all the effort to make a non-VPN, secure site--which typically means "SSL wrapping" plus all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" into the other apps/Apache.
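
    For what it's worth, the single-machine layout is feasible with stock OpenVPN: run a routed (tun) server and bind each private service to the tunnel address only. A minimal sketch with illustrative names and addresses, not a tested config for this exact setup:

        # /etc/openvpn/server.conf
        port 1194
        proto udp
        dev tun
        server 10.8.0.0 255.255.255.0   # clients get 10.8.0.x addresses
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        keepalive 10 120
        persist-key
        persist-tun

        # then bind each service to the VPN side only, e.g. in Apache:
        #   Listen 10.8.0.1:80
        # leaving only OpenVPN (1194/udp) and sshd exposed on the public IP.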

  • Trouble printing to local printer when connected to VPN with split-tunneling enabled

    - by Marve
    I'm a volunteer network admin for a multi-tenant non-profit office space. One of our new tenants uses a VPN to connect to remote resources, using RRAS and Small Business Server 2008. They also have a local network printer for the workstations in our office. When connected to the VPN, they cannot print to the local printer.

    I informed their network admin that they need to enable split-tunneling to fix this. Their network admin enabled split-tunneling, but apparently printing still didn't work. He told me that I need to open port 1723 on our office firewall to allow it to work.

    I'm just a novice administrator and not familiar with RRAS, but this doesn't sound right to me, and I haven't been able to find anything on the web to validate it. Additionally, my understanding of split-tunneling is that it is handled entirely by the VPN client and should work irrespective of firewall settings.

    Is my understanding of the situation incorrect? What steps should I take to resolve this problem?
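
    For reference, TCP port 1723 is the PPTP control channel between the client and the VPN server; local print traffic never touches it, so opening it on the office firewall shouldn't be the fix. A quick way to verify whether split-tunneling is actually in effect is to inspect the client's routing table while connected (addresses below are examples only):

        :: on a connected Windows workstation
        route print
        :: the office subnet (e.g. 192.168.1.0/24) should route via the LAN
        :: adapter, not the PPP/VPN interface; then confirm the actual path:
        tracert 192.168.1.50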

  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file
        \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini
        from a domain controller and was not successful. Group Policy settings may not be
        applied until this event is resolved. This issue may be transient and could be
        caused by one or more of the following:
        a) Name Resolution/Network Connectivity to the current domain controller.
        b) File Replication Service Latency (a file created on another domain controller
           has not replicated to the current domain controller).
        c) The Distributed File System (DFS) client has been disabled.
        Error code: 5 = Access Denied.

    The incredibly helpful post is this one: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html. Quoting from it:

        HERE IS A LIST OF POTENTIAL PROBLEMS THAT CAN LEAD TO 1030 AND 1058 EVENT ERRORS:
        -- Sometimes the permissions of the file folders that contain Group Policies
           (the Sysvol folder) can be corrupted.
        -- Sometimes you have problems with NetBIOS.
        -- Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        -- Sometimes you may have problems with File Replication Services, which almost
           always indicates a problem with DNS.
        -- Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have the problem listed where sysvol is a subfolder of sysvol. The directory structure is:

        - sysvol
          - domain
          - staging
          - staging areas
          - sysvol (shared as "\\server\sysvol")
            - domain.local
              - ClientAgent
              - Policies
              - scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident this is the issue with the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine -- my Server 2008 servers cannot, and get the error. This is consistent across all my servers. This latter fact makes me uncertain what I need to do to fix this up. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one?

    Any help greatly appreciated. Cheers, Tim.
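
    Before moving anything: the nested layout described (SYSVOL containing domain, staging, staging areas, and a shared sysvol subfolder holding the domain name) is the standard FRS structure, so "sysvol inside sysvol" on its own may be a red herring, and the Access Denied points back at permissions. A hedged starting point, run from one of the affected Server 2008 boxes (paths taken from the question):

        net share SYSVOL
        :: check that Authenticated Users can read the policy folder
        icacls "\\domain.local\SYSVOL\domain.local\Policies"
        dcdiag /test:netlogons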

  • keyboard intermittently stops working, even after reinstalling windows 7; possibly a Chrome issue?

    - by neverskipbreakfast
    My keyboard intermittently stops working. Sometimes a couple of keys will work, but usually none. Sometimes if I mash a couple of the Ctrl/Alt/Windows-type keys randomly for a bit, the keyboard will let me type one more letter before stopping again. Sometimes the keys will open a program menu, but usually not. I have even completely wiped my machine and reinstalled Windows 7; the problem continues.

    Specs: Intel iMac (early 2006, 2.0GHz, 2GB RAM, 240GB HD) running ONLY Windows 7 Professional, 32-bit (NOT through Boot Camp), and using a USB keyboard (Saitek Eclipse II).

    * Unplugging & reconnecting the keyboard does NOT fix it.
    * Connecting a different keyboard does NOT fix it. That one won't work, either.
    * Drivers are up-to-date. Removing and reinstalling drivers does NOT fix it.
    * Restarting the computer does NOT fix it. In fact, when the Windows logon screen appears, the keyboard won't work, and neither will the icon to pull up the on-screen keyboard. Otherwise my mouse can click around just fine. I can only log onto a non-password-protected account.
    * Generally, logging in as a different Windows user fixes it. I can then log back on to my main user account and continue work for a few hours until it happens again.
    * Clearing my Chrome browsing data stopped the problem from recurring for a week or so.
    * I use Avira free antivirus software, and repeated scans turn up nothing fishy.
    * I have already REINSTALLED Windows 7 (not just a restore). The problem returned after 2 days of use.

    The only thing I suspect is something in Google Chrome -- I used my Google account to reload all my previous Chrome extensions, saved data, etc. (Chrome extensions installed: AdBlock, Better Google Tasks, DropBox, FB Photo Zoom, Google Mail Checker, StayFocusd.)

    Any ideas? Any at all?

  • Asus K50I sound issues

    - by MrStatic
    I have an Asus K50IJ (Best Buy) laptop and have issues with my sound. The speakers themselves work fine, but when I plug into the headphone jack it auto-mutes the front channel and no sound comes out of either the speakers or the headphones. If I then unmute the channel, I get sound from both the speakers and the headphones. alsamixer shows the Headphone channel as all grayed out.

    In /etc/modprobe.d/alsa-base.conf I have tried:

        snd-hda-intel model="asus-laptop"

    and:

        snd-hda-intel model="asus"

    In Sound Preferences I have gone to Output and changed the Connector to 'Analog Headphones'; that results in no sound from either the speakers or the headphones. As one forum suggested, I tried commenting out "blacklist snd_pcsp" in the blacklist.conf, which resulted in no change.

    lspci -v shows:

        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
            Subsystem: Santa Cruz Operation Device 1043
            Flags: bus master, fast devsel, latency 0, IRQ 45
            Memory at fe9f4000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel
            Capabilities: [130] Root Complex Link
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel
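
    In case it helps someone with the same hardware, the usual loop for these HDA laptops is: identify the codec first, then try model= overrides one at a time. Which values are valid depends on the codec, so treat the below as guesses rather than a confirmed fix; note also that the override lines need the 'options' keyword to take effect:

        # find out which codec this K50IJ actually has
        grep -i codec /proc/asound/card0/codec*

        # /etc/modprobe.d/alsa-base.conf
        options snd-hda-intel model=auto

        # reload the driver and retest after each change (Ubuntu-style systems)
        sudo alsa force-reload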

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance.

    I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work.

    The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be:

    1) Very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
    2) Offsite backups could be managed by multiple trusted people without granting access to the server room.
    3) Windows Imaging provides a poor man's deduplication, so each backup volume can contain a decent backup history.

    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome.

    NOTE: I'm backing up very few but very large files, so file replication is not a good option.

  • Exchange 2010 Transport rules stepping on each other

    - by TopHat
    I have a group of users that I have to restrict email access for, and so far using Exchange transport rules has worked very well. The problem I am having is that Rule 0 is supposed to Bcc the email to a review mailbox but otherwise not change anything, and Rule 9 is supposed to block the email and throw a custom NDR to tell the user why they were blocked. Here are my results in practice, however:

    - If Rule 0 is enabled and Rule 9 is enabled, then only Rule 9 functions.
    - If Rule 0 is disabled and Rule 9 is enabled, then Rule 9 functions.
    - If Rule 0 is enabled and Rule 9 is disabled, then Rule 0 functions.

    This is after the Transport service has been restarted (multiple times, actually). I have other rule pairs that work correctly -- copy email going to an address outside the domain and then block it; copy email coming in from outside and then block it -- but none of those are overlapping rulesets.

    Here is the rule for copying internal emails (Rule 0):

        Apply rule to messages from a member of <group>
        Blind carbon copy (Bcc) the message to <review mailbox>
        except when the message is sent to a member of <group> or [email protected]

    Here is the rule to block the same email (Rule 9):

        Apply rule to messages from a member of <group>
        send 'Email to non-supervisors or managers has been prohibited. Please contact
        your supervisor for more information.' to sender with 5.7.420
        except when the message is sent to <group>, [email protected], <...>

    The distribution group used for membership in these rules is used for the other blocking and copying rules and works as expected. Is there something I missed in this setup? All of the copy rules are at the front of the transport-rule group and all the actual blocking rules are at the end of the queue, if that makes a difference. Any thoughts as to why the email doesn't get copied when it gets blocked?
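
    One explanation consistent with those symptoms: "Bcc the message" does not create a separate copy; it adds a recipient to the same message in transit, so when Rule 9 rejects the message with an NDR, it rejects it for the added Bcc recipient too. If that is what is happening, a mechanism that captures the copy independently of transport-rule actions (e.g. a journal rule) should survive the reject. A sketch with placeholder addresses:

        # Exchange Management Shell: confirm the actual run order first
        Get-TransportRule | Format-Table Name,Priority,State

        # journaling captures a copy regardless of later transport-rule actions
        New-JournalRule -Name "Review copy" -Recipient restricted.user@example.com `
            -JournalEmailAddress review@example.com -Scope Global -Enabled $true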

  • lighttpd: why using port >= 9000 does not work properly

    - by yejinxin
    I have a lighttpd server which works normally. I can access this website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL.

    Now I want to copy this website as a test one (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including changing lighttpd.conf.

    Now what I'm confused about is this: if I change the port to a number less than 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd can start, but I cannot access the new website from OUTSIDE, while I can access it from INSIDE (I mean in the same LAN or localhost). The access log from INSIDE is like below:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I used to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir           = "/home/work/lighttpd_9876"
        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 the same?
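
    Nothing in lighttpd itself treats ports above 9000 differently, so an inside/outside asymmetry like this usually points at a host firewall, or, on a Red Hat-style system (which the paths and curl build suggest), SELinux port labelling. A hedged checklist:

        # does the firewall treat 8080 and 9876 differently?
        iptables -L -n | grep -E '8080|9876'

        # if SELinux is enforcing, only labelled ports are allowed for web daemons
        getenforce
        semanage port -l | grep http_port_t
        # if 9876 is missing from that list:
        semanage port -a -t http_port_t -p tcp 9876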

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer had a device file mapped to the whole disk: /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. Also, the sd@3,0:wd file under the /devices/ tree is non-existent. Do I have to prepare/partition the disk somehow to cause the whole-disk device to exist? Here are a few outputs that might be useful:

        jeffmc@ats-ds2:/dev/dsk$ zpool status
          pool: datapool
         state: DEGRADED
        status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-2Q
         scrub: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                datapool    DEGRADED     0     0     0
                  mirror-0  DEGRADED     0     0     0
                    c7t2d0  ONLINE       0     0     0
                    c7t3d0  UNAVAIL      0     0     0  cannot open
                  mirror-1  ONLINE       0     0     0
                    c7t4d0  ONLINE       0     0     0
                    c7t5d0  ONLINE       0     0     0

        jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
        cannot open 'c7t3d0': no such device in /dev/dsk
        must be a full path or shorthand device name

        jeffmc@ats-ds2:/dev/dsk$ sudo format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
               0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
               1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
               2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
               3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
               4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
               5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
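
    A common state after hot-swapping a disk on Solaris is that format can see the drive but the /dev links simply haven't been rebuilt. A sketch of the usual steps (run as root; the pool and device names come from the question):

        # rebuild /dev/dsk links for newly attached devices (-C cleans stale ones)
        devfsadm -Cv
        # confirm the disk is connected and configured
        cfgadm -al
        # give the new disk a label from format (select disk 3, then 'label'),
        # after which the replace should find it:
        zpool replace datapool c7t3d0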

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non-SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card into a certain number of SATA cables), how many drives can I connect to this card?

    The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported on the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more?

    Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around in the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the Task Manager. There is a Firefox entry under the Processes tab.

    Normally, I probably wouldn't have noticed and would just have opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's Applications tab.

    I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP Headers, Microsoft .NET Framework Assistant, NoScript, Web Developer and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus Gmail and the aforementioned Slacker.

    Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

  • _default_ VirtualHost overlap on port 443, the first has precedence

    - by Mohit Jain
    I have two Ruby on Rails 3 applications running on the same server (Ubuntu 10.04), both with SSL. Here is my Apache config file:

        <VirtualHost *:80>
          ServerName example1.com
          DocumentRoot /home/me/example1/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
          ServerName example1.com
          DocumentRoot /home/me/example1/production/current/public
          SSLEngine on
          SSLCertificateFile /home/me/example1/production/shared/example1.crt
          SSLCertificateKeyFile /home/me/example1/production/shared/example1.key
          SSLCertificateChainFile /home/me/example1/production/shared/gd_bundle.crt
          SSLProtocol -all +TLSv1 +SSLv3
          SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

        <VirtualHost *:80>
          ServerName example2.com
          DocumentRoot /home/me/example2/production/current/public
        </VirtualHost>

        <VirtualHost *:443>
          ServerName example2.com
          DocumentRoot /home/me/example2/production/current/public
          SSLEngine on
          SSLCertificateFile /home/me/example2/production/shared/iwanto.crt
          SSLCertificateKeyFile /home/me/example2/production/shared/iwanto.key
          SSLCertificateChainFile /home/me/example2/production/shared/gd_bundle.crt
          SSLProtocol -all +TLSv1 +SSLv3
          SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
        </VirtualHost>

    What's the issue: on restarting my server, it gives me output like this:

        * Restarting web server apache2
        [Sun Jun 17 17:57:49 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
        ... waiting [Sun Jun 17 17:57:50 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence

    On googling why this issue comes up, I got something like this:

        You cannot use name-based virtual hosts with SSL because the SSL handshake (when
        the browser accepts the secure Web server's certificate) occurs before the HTTP
        request, which identifies the appropriate name-based virtual host. If you plan to
        use name-based virtual hosts, remember that they only work with your non-secure
        Web server.

    But I'm not able to figure out how to run two SSL applications on the same server. Can anyone help me?
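
    The warning is about name-based virtual hosting on port 443. On Ubuntu 10.04 (Apache 2.2.14 built against OpenSSL 0.9.8k) SNI is available, so two certificates can share one IP for reasonably modern clients; the missing piece is declaring 443 as name-based. A sketch (clients without SNI support, e.g. IE on Windows XP, will still be served the first vhost's certificate):

        # /etc/apache2/ports.conf
        NameVirtualHost *:443
        Listen 443

    The older alternative, which works for every client, is one IP address per SSL vhost: <VirtualHost 1.2.3.4:443> for example1.com and <VirtualHost 1.2.3.5:443> for example2.com, each with its own certificate.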

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.)

    The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this:

        ServerAlias *.example.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]  # <---
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]        # AND is implicit with above
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    (Note: it was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere, like [L] or trying just [P]. It would either not show anything or get in loops. Also, some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome -- in any case -- if this part can be done better.)

    Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework... which is unaware of subdomains. Is something like that possible?
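
    One way to tell the two cases apart is %{THE_REQUEST}, which always holds the client's original request line and is never touched by earlier internal rewrites. A sketch of an external-redirect rule to place before the internal proxy rule (untested against the rest of this config):

        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]

    External requests that literally ask for /foo/... match the THE_REQUEST condition and get bounced to the clean URL; the internally rewritten requests never re-enter this rule, because their original request line still lacks the /foo/ prefix.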

  • Log and debug/decrypt an windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with a (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application.

    Now the question: how can I decrypt and log the HTTPS traffic? I know of several solutions which don't apply in my case:

    - Fiddler is a man-in-the-middle HTTPS proxy which I cannot use, since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
    - Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have.
    - Any browser extension, since the application is not a browser.

    If I remember correctly, there have been some trojans that capture online banking information by hooking into/replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a blackhat presentation with some details available to read. They refer to Microsoft Research Detours for easy API hooking.
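
    Since the application resolves the service hostname itself, one proxy-less approach is to hijack name resolution and terminate TLS locally with a reverse proxy that forwards to the real server. A sketch using mitmproxy (recent releases; the hostname api.vendor.example and server IP are placeholders). Caveat: this only works if the application either trusts your local CA or doesn't validate the server certificate, which a self-signed-cert service often doesn't:

        # 1) in C:\Windows\System32\drivers\etc\hosts, point the hostname here:
        #      127.0.0.1  api.vendor.example
        # 2) run mitmproxy as a TLS-terminating reverse proxy on port 443:
        mitmproxy --mode reverse:https://203.0.113.10:443 -p 443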

  • Nginx terminate SSL for wordpress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is:

        upstream admin_nossl {
            server 192.168.100.36:80;
        }

        server {
            listen      192.168.71.178:443;
            server_name host.domain.com;

            ssl on;
            ssl_certificate           /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key       /etc/nginx/wild.domain.com.key;
            ssl_session_timeout       5m;
            ssl_protocols             SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache         shared:SSL:10m;
            ssl_ciphers               RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;
            }
        }

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
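
    One classic cause of exactly this "switches back to non-secured" behavior: WordPress behind a TLS-terminating proxy doesn't know the outer hop was HTTPS, so it keeps generating http:// links and redirects. A hedged sketch of the usual fix, passing the scheme through and honoring it in WordPress:

        # in the location / block of the nginx server above
        proxy_set_header X-Forwarded-Proto https;

        # and near the top of wp-config.php:
        #   if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) &&
        #       $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
        #       $_SERVER['HTTPS'] = 'on';
        #   }

    The WordPress "siteurl" and "home" settings may also need to be https:// for the redirects to stop entirely.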

  • Cloned Win7: Keyboard doesn't work

    - by Marc
    I cloned my old Windows 7 hard disk to a shiny new Seagate Momentus XT 500GB using the free EaseUS Disk Copy tool on my laptop. After the clone process I used the Windows 7 installation disc to start the automatic startup repair. This took maybe 15 minutes, and then my cloned disk was able to start.

    Now the cloned disk boots until the login screen, and then I can't do anything because my keyboard just doesn't work. I tried connecting an external USB keyboard, but this didn't help. The mouse is working fine. Note that the keyboard works fine in the BIOS and in the Windows startup options menu.

    I booted into safe mode, and again the keyboard is not working at all. I also noticed that the letters "Press CTRL+ALT+Delete to login" are now shown in an italic font, but they used to be shown non-italic on the original disk.

    I have now replaced the clone with the original disk again, and from here everything works fine. Does anybody have an idea how I can get my keyboard back?

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security -- not letting confidential information fall into the wrong hands (i.e. competitors).

    The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor.

    The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!).

    Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because it does not directly imply a wrong motive.

    Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances, where one runs on 8080 and the other on 8081. We need to have HTTPS enabled for the 8081 server. First we tried enabling HTTPS on the 8080-port instance by generating the keystore and editing server.xml, and it successfully worked. However, when we tried the same thing for 8081 it did not; note that we removed HTTPS from the 8080 server first, before enabling it for 8081.

    This is what was used in server.xml for both 8080 and 8081. The only difference was that the port was changed from 8080 to 8081 when trying to enable HTTPS for the 8081-port instance. What am I doing wrong and what needs to be changed?

    NOTE: When I say HTTPS was enabled for 8080, I mean that when you visit https://URL:8484 you will actually be visiting the 8080-port instance. However, when SSL is enabled for 8081 and I visit https://URL:8484, I get that the web page is unavailable.

    COMMENTLESS VERSION

        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- https -->
            <Connector port="8080" address="${jboss.bind.address}" maxThreads="350"
                       maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- https1 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    WITH COMMENTS VERSION

        <Server>
          <!-- APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!-- Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Use a custom version of StandardService that allows the connectors to be
               started independent of the normal lifecycle start to allow web apps to be
               deployed before starting the connectors. -->
          <Service name="jboss.web">
            <!-- A "Connector" represents an endpoint by which requests are received and
                 responses are returned. Documentation at:
                   Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                   Java AJP Connector: /docs/config/ajp.html
                   APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" address="${jboss.bind.address}" maxThreads="350"
                       maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
                 configuration; when using APR, the connector should be using the OpenSSL
                 style configuration described in the APR documentation -->
            <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                 scheme="https" secure="true"
                 keystoreFile="${jboss.server.home.dir}/conf/zara.keystore"
                 keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" /> -->
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150"
                       scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <!-- The JAAS based authentication and authorization realm implementation that
                   is compatible with the jboss 3.2.x realm implementation.
                   - certificatePrincipal : the class name of the
                     org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping
                     X509[] cert chains to a Princpal.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*,
                     one of strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires
                       that the user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles -->
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior
                   of JBossSecurityMgrRealm, but overrides the authorization checks to use
                   JACC permissions with the current java.security.Policy to determine
                   authorized access.
                   - allRolesMode : (same options as above)
              <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              -->
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <!-- Uncomment to enable request dumper. This Valve "logs interesting contents
                     from the specified Request (before processing) and the corresponding
                     Response (after processing). It is especially useful in debugging
                     problems related to headers and cookies." -->
                <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> -->
                <!-- Access logger -->
                <!-- <Valve className="org.apache.catalina.valves.AccessLogValve"
                     prefix="localhost_access_log." suffix=".log" pattern="common"
                     directory="${jboss.server.log.dir}" resolveHosts="false" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host.
                     Does not provide SSO across a cluster. If this valve is used, do not use
                     the JBoss ClusteredSingleSignOn valve shown below. A new configuration
                     attribute is available beginning with release 4.0.4: cookieDomain
                     configures the domain to which the SSO cookie will be scoped (i.e. the
                     set of hosts to which the cookie will be presented). By default the
                     cookie is scoped to "/", meaning the host that presented it. Set
                     cookieDomain to a wider domain (e.g. "xyz.com") to allow an SSO to span
                     more than one hostname. -->
                <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host
                     AND to all other hosts in the cluster. If this valve is used, do not use
                     the standard Tomcat SingleSignOn valve shown above. Valve uses a
                     JBossCache instance to support SSO credential caching and replication
                     across the cluster. The JBossCache instance must be configured
                     separately. By default, the valve shares a JBossCache with the service
                     that supports HttpSession replication. See the
                     "jboss-web-cluster-service.xml" file in the server/all/deploy directory
                     for cache configuration details. Besides the attributes supported by the
                     standard Tomcat SingleSignOn valve (see the Tomcat docs), this version
                     also supports the following attributes:
                       cookieDomain   see above
                       treeCacheName  JMX ObjectName of the JBossCache MBean used to support
                                      credential caching and replication across the cluster.
                                      If not set, the default value is
                                      "jboss.cache:service=TomcatClusteringCache", the
                                      standard ObjectName of the JBossCache MBean used to
                                      support session replication. -->
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <!-- Check for unclosed connections and transaction terminated checks in
                     servlets/jsps. Important: The dependency on the CachedConnectionManager
                     in META-INF/jboss-service.xml must be uncommented, too -->
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes.

    The long story is this: I had a large, slow HDD with a Windows 7 & Debian Wheezy dual-boot on it, perfectly bootable. Then I ordered an SSD drive and prepared my system partitions to fit onto the much smaller SSD. I wanted the following schema:

        128 GB  Windows
         24 GB  / on Debian
         86 GB  /home on Debian

    (Strange size for /home because there's no such thing as a true 256GB disk drive.)

    So, I prepared such partitions on my initial HDD, installed the new SSD, loaded a GParted live USB (can't remember now what it was really named), and then just copy-pasted the partitions from HDD to SSD. So now I have the following partitions across the physical disks:

        SSD
            128 GB  copy of original Windows partition
             24 GB  copy of presumably Debian /
             86 GB  copy of presumably Debian /home
        HDD
            128 GB  Windows
             24 GB  / on Debian
             86 GB  /home on Debian
            ... several other partitions with non-system data ...

    The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB; the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first.

    I managed to create a Debian Testing installation USB, loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB which lists both Windows and Debian -- from the HDD. So I am now back in the initial position.

    Please, how should I set up GRUB so it'll load the OSes correctly from the SSD? Should I fire up my Debian, fiddle with GRUB's config, and reinstall it again to the same place (the SSD)?
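
    One trap specific to this setup: a GParted copy-paste clones the filesystems byte-for-byte, UUIDs included, so GRUB's search-by-UUID and the copied /etc/fstab can quite legitimately resolve to the HDD partitions even when GRUB itself sits on the SSD. A hedged sketch of the usual sequence, assuming the SSD really is /dev/sda, the HDD /dev/sdb, and ext-family Debian partitions (device numbers below are illustrative):

        # from the running Debian (or a chroot into the SSD's root partition):
        blkid                          # note the duplicated UUIDs
        tune2fs -U random /dev/sdb5    # give the *HDD* copies new UUIDs so
        tune2fs -U random /dev/sdb6    # only the SSD keeps the originals
        grub-install /dev/sda          # GRUB into the SSD's MBR
        update-grub                    # regenerated entries should now point at the SSD

    Temporarily unplugging the HDD and re-running grub-install and update-grub from the SSD's Debian achieves the same disambiguation with less typing.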

  • Connecting to unsecured wireless network

    - by Sanchez
    I would like to know what information is public and can be intercepted in a non-open but unsecured wireless network. Moreover, is there anything I can do to make it more "secure", other than using an https connection whenever possible?

    In more detail: I recently discovered (with surprise) that the wireless network at my school is actually unsecured. Although not everyone can connect to it (you need a student ID), I am told that certain software like Wireshark would be able to intercept the data. Since I have been using the network for all private purposes (email, Facebook, etc.), I do feel quite insecure now and would like to understand the situation a bit better.

    I installed Wireshark and tried to play with it, but all I can see is something alien to me. In any case, all of it seems to come directly or indirectly from my IP address, and I have long thought that usually different computers in the same wireless network would be assigned different addresses. Am I wrong?

    If not, then I feel very confused about what information is actually being captured (potentially by other users in the network, since I don't think I could capture the activities of others in the same network anyway), and whether it's safe to use the network at all. (Gambling on others in the same network showing good behaviour is apparently not an option.) Thank you.

  • Openfire: Granular alerts

    - by R.S.
    Our organization has had an Openfire server up and running for about a year now. So far we have used it for messaging in the I.T. Dept. and alerts to all users. We hit a snag this week when one system went down and several notifications were sent out to inform users of progress. Some of the users were radiologists that do not use the particular system in question, and these users found it more of an annoyance than informative. Since then I have been tasked with finding a more granular system for alerts.

    I am confident that Openfire can handle this, and I have just about settled on a way of getting it to work. My idea is to create a half dozen or so users, for example: Staff, Doctor, Assistant and Supervisor. Using Spark as our messenger has worked great so far, so I would like to stick with that if possible. With that in mind, under advanced login features the resource name can be changed to something unique, and non-unique users can log in under the same account. However, when a message is sent to one of these users, the message delivery is inconsistent. Currently I have 4 users under the Assistant account, and it seems only 1 of them receives the messages.

    Is this scenario even possible? I am avoiding working with the groups in Openfire because the function is atrocious. I could possibly integrate the system into our Active Directory, but I don't think that will get us to a workable solution any quicker or more efficiently.

  • Why do disk images hosted on a read-only HFS+ partition behave differently?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or if there's something else involved.

    I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64-bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them.

    When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine.

    My question now is: why do applications behave differently depending on whether the read-only image file -- which should be abstracted away behind the read-only virtual Daemon Tools drive -- is located on a read-only HFS+ partition or on a Windows network share?

    And I'll just roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi,

    We supply zone files to our customers. Some zone files are big, about 300MB, and some are quite small, maybe around 1MB. We had an issue where someone set up a script to continually download a file; imagine downloading a 300MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit the downloads somehow. We currently use the proftpd server. Also note that there are different users for different domains -- say, if you want to download the zone file for the .INFO domain, you use a particular user. That user can't download any other zone's file.

    This is what we are looking for:

    - A maximum of 400MB of downloads per user per day, or even a different download limit for different users per day.
    - One connection per user at any time.
    - A maximum of 5 (non-simultaneous) connections per user per day; anyone trying to exceed that gets banned for 24 hours.

    Has anyone used an FTP server with restrictions similar to the above? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
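
    ProFTPD can cover most of this natively; the per-day byte quota is the awkward part (mod_quotatab counts transferred bytes, but making the limit "per day" needs an external cron job to reset the tally). A sketch with real directive names but untested numbers:

        # proftpd.conf
        MaxClientsPerUser 1 "Only one session per user allowed"

        <IfModule mod_ban.c>
          BanEngine on
          BanTable  /var/data/proftpd/ban.tab
          # ban a client for 24h once it connects more than 5 times within 24h
          BanOnEvent ClientConnectRate 5/24:00:00 24:00:00
        </IfModule>

        # per-user transfer limits would live in mod_quotatab tables; a nightly
        # cron job zeroing the tally table approximates a daily 400MB cap.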

  • How to make a Windows Vista boot / recovery cd from a running system without using an original CD / DVD

    - by Giorgio
    I have just repaired a friend's computer (replaced the motherboard), and now I am trying to repair the Windows (Windows Vista) partitions. Unfortunately, probably due to the fact that he tried to start it several times after the old motherboard had stopped working (no signal on video), the partition table or the file systems (or both?) now appear to be damaged. I managed to boot Windows a couple of times but could not complete the boot.

    I tried to repair the partition table and file systems using Linux RIP (booting from a USB stick), but the Linux utilities say that the file system is damaged and I should run chkdsk /f from Windows. So I now need a Windows boot CD from which I can boot and run chkdsk or any other Windows utilities that can repair the file system.

    Is there an easy way to create such a CD? Or can I download it for free somewhere? All the links to Windows Vista boot/repair CDs I have found on the internet refer to non-free stuff. Any hint?

    EDIT: I have a working laptop with Windows Vista installed, so one solution would be to make a bootable CD or USB from it, so that I can boot the desktop and run the repair utilities. However, I do not have the Windows Vista installation DVD, because both computers were bought with Windows pre-installed.
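
    If any Vista-era repair environment can be borrowed or booted -- a Vista install DVD from another machine (any edition works for the repair options; it does not have to match the installed one), the desktop's "Repair Your Computer" F8 entry if its recovery partition survived, or a Windows RE disc -- the actual repair from its command prompt is short (sketch; the drive letter may differ inside RE):

        chkdsk C: /f /r
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd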
