Search Results

Search found 15591 results on 624 pages for 'problems'.


  • User receives group membership error for terminal server even though he has rights

    - by BlueToast
    http://www.hlrse.net/Qwerty/TSLoginMembership.png

    "To log on to this remote computer, you must be granted the Allow log on through Terminal Services right. By default, members of the Remote Desktop Users group have this right. If you are not a member of the Remote Desktop Users group or another group that has this right, or if the Remote Desktop User group does not have this right, you must be granted this right manually."

    Only as of today did a particular user begin receiving this message for a second terminal server they use; otherwise, they have never had any problems authenticating into this server. We have no restrictions on simultaneous and multiple logins.

    On each terminal server, we have a group and security group like "_Users" locally in the Builtin\Remote Desktop Users group. For this particular user, on this particular terminal server we have locally given him Administrator, Remote Desktop Users, and Users membership; in AD we have given him DOMAIN\Administrator, Builtin\Remote Desktop Users, and DOMAIN\_Users. It still gives us that error message.

    We gave him membership to another (random) terminal server by simply making him a member of another DOMAIN\_Users group -- he was successfully able to log in to that random terminal server. So, from scratch, we created an AD account 'dummy' (username) with only Domain Users membership. Tried to log in to this particular server -- no success. So I added 'dummy' to the DOMAIN\_Users group, and was then successfully able to log in. Other users from this user's department are able to log in to this particular server just fine as well.

    We checked the Security logs on this particular server, and while it is logging everything, the only thing it appears not to log is these failed login attempts from this particular user who receives the error message. We have tried rebooting the server, and the user is still receiving that error message.

    Read the article

  • NVidia TwinView - slow rendering on dual desktop [closed]

    - by lisak
    Hey, does anybody have experience with it? I've set it up 4 times on 4 different machines, and there were always problems with slow rendering (for instance: scrolling pages in a browser is not fluent). But there was always something that finally made it work perfectly... I remember that one time this option helped, but not now:

        Option "RenderAccel" "1"

    Hardware: Nvidia GeForce 8400GS or Zotac GeForce 9500GT, monitors connected via DVI and HDMI connectors, proper nvidia driver installed.

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
            ModulePath     "/usr/lib64/xorg/modules"
            FontPath       "/usr/share/fonts/local"
            FontPath       "/usr/share/fonts/TTF"
            FontPath       "/usr/share/fonts/OTF"
            FontPath       "/usr/share/fonts/Type1"
            FontPath       "/usr/share/fonts/misc"
            FontPath       "/usr/share/fonts/CID"
            FontPath       "/usr/share/fonts/75dpi/:unscaled"
            FontPath       "/usr/share/fonts/100dpi/:unscaled"
            FontPath       "/usr/share/fonts/75dpi"
            FontPath       "/usr/share/fonts/100dpi"
            FontPath       "/usr/share/fonts/cyrillic"
        EndSection

        Section "Module"
            Load  "dri2"
            Load  "glx"
            Load  "extmod"
            Load  "record"
            Load  "dbe"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Acer AL1715"
            HorizSync    30.0 - 83.0
            VertRefresh  50.0 - 75.0
        EndSection

        Section "Device"
            Identifier  "Nvidia"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "MSI big bang-fuzion"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 8400 GS"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Monitor    "Monitor0"
            DefaultDepth 24
            Option     "RenderAccel" "1"
            Option     "AllowGLXWithComposite" "1"
            Option     "TwinView" "1"
            Option     "TwinViewXineramaInfoOrder" "DFP-1"
            Option     "metamodes" "CRT: 1280x1024 +1920+0, DFP: 1920x1080 +0+0"
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection
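
    A quick way to check whether the driver is actually honouring the acceleration options is to look at the X log and query direct rendering. A minimal diagnostic sketch, assuming the default log path /var/log/Xorg.0.log and that glxinfo (mesa-utils) is installed:

        # Confirm the nvidia driver loaded and which options it picked up
        grep -iE "nvidia|renderaccel|twinview" /var/log/Xorg.0.log

        # Software rendering here would explain the sluggish scrolling
        glxinfo | grep -E "direct rendering|OpenGL renderer"

    If "direct rendering" reports No, the slow scrolling is almost certainly a driver/installation issue rather than a TwinView setting.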

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or has problems with Flash games. To say the least, this isn't working for them.

    Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy though.

    However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
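
    For the separate OpenVPN VM idea, the server side only needs to push a route to the office subnet so home machines can reach the file server. A minimal sketch; the paths, VPN pool and the 192.168.0.0/24 office LAN are hypothetical, and the certificates are assumed to have been generated already (e.g. with easy-rsa):

        # /etc/openvpn/server.conf (sketch)
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key  /etc/openvpn/keys/server.key
        dh   /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0            # address pool handed to VPN clients
        push "route 192.168.0.0 255.255.255.0"   # let clients reach the office LAN / file server
        keepalive 10 120
        persist-key
        persist-tun

    Access control on the shares themselves would still be done with NTFS/share permissions on the Windows Server; the VPN only gets the home machines onto the network, and users would authenticate to the shares with their AD credentials when connecting.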

    Read the article

  • bond0:0 + define virtual IP

    - by yael
    In my Linux server I have the following (Linux version: RedHat Linux 5.3.0.0; this Linux server has only one LAN):

        more /etc/sysconfig/network-scripts/ifcfg-bond0:0
        DEVICE=bond0:0
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=10.10.10.12
        NETMASK=255.255.255.0

        ifconfig -a
        bond0     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                  UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        bond0:0   Link encap:Ethernet  HWaddr 00:00:00:00:00:00
                  inet addr:10.10.10.12  Bcast:1.1.1.255  Mask:255.255.255.0
                  UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1

        eth0      Link encap:Ethernet  HWaddr 00:0E:0C:C7:F8:92
                  inet addr:1.1.1.1  Bcast:1.1.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::20e:cff:fec7:f892/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:8600 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4764 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:717979 (701.1 KiB)  TX bytes:598620 (584.5 KiB)
                  Memory:b8820000-b8840000

    My problems:

        - Why do I get HWaddr 00:00:00:00:00:00 and not the real MAC address?
        - I can't ping another server at 10.10.10.11 from my server.
        - Is it possible to define bond0:0 when I have only one LAN (eth0)?

    Other info:

        more /etc/modprobe.conf
        alias eth0 e1000e
        alias eth1 e1000e
        alias eth2 e1000e
        alias eth3 e1000e
        alias scsi_hostadapter mptbase
        alias scsi_hostadapter1 mptsas
        alias scsi_hostadapter2 ata_piix
        alias bond0 bonding
        alias bond1 bonding
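
    The all-zero MAC usually means the bond0 master has no slave interface enslaved to it; bond0:0 is only an IP alias of bond0, so bond0 itself needs a config and eth0 needs to join it. A RHEL 5 style sketch, assuming eth0 is the interface that should carry the bond and that the 10.10.10.x network is actually reachable on that wire:

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=1.1.1.1
        NETMASK=255.255.255.0
        BONDING_OPTS="mode=active-backup miimon=100"

        # /etc/sysconfig/network-scripts/ifcfg-eth0 (the slave)
        DEVICE=eth0
        ONBOOT=yes
        BOOTPROTO=none
        MASTER=bond0
        SLAVE=yes

        # /etc/sysconfig/network-scripts/ifcfg-bond0:0 (the extra alias IP)
        DEVICE=bond0:0
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=10.10.10.12
        NETMASK=255.255.255.0

    After a `service network restart`, `cat /proc/net/bonding/bond0` should list eth0 as a slave, and bond0/bond0:0 should inherit eth0's real MAC address.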

    Read the article

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched.

    In the data directories, I'm seeing some files as 0 bytes, and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
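
    Since chown/chmod/chflags have no effect, the usual remaining blockers are ACLs or extended attributes rather than classic permissions. A diagnostic sketch using the stock OS X command-line tools (the object path below is just the one quoted above):

        # Show BSD flags (O), ACLs (e) and extended attributes (@) on one of the stuck files
        ls -leO@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca

        # List extended attributes in detail
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca

        # Strip any ACLs recursively from the repository, then retry the copy
        sudo chmod -RN .git

    If the listing shows deny ACL entries, removing them with chmod -N is usually what unblocks cp; if the files really are 0 bytes on disk, though, no permission fix will bring the content back.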

    Read the article

  • Cannot terminate process, "already terminated"

    - by felix-freiberger
    On Windows 8, I regularly get processes into a state where I can't terminate them. Skypekit.exe seems to be the process most likely to trigger the issue, but other processes can do it, too.

    When I try to terminate these processes, I sometimes get an "access denied" message, and sometimes nothing happens -- but every following attempt to kill that process results in an "access denied" message, too, even though I:

        - have administrative rights (and ran the task manager with them)
        - own that process
        - have the right to terminate it

    "Process Hacker 2" shows a more detailed error message, stating that I couldn't terminate the process because it is already terminated. Still, the process is most definitely still there, because every task manager I tested can still see it. Process Hacker's "Terminator" is unable to kill such a process, and when running the "Close the process' handles" tactic, Process Hacker gets stuck itself, leaving its windows "not responding". In that state, other task managers are in turn unable to kill Process Hacker.

    The only way I have found to actually end these processes is to shut down (which works without any problems). Why is this happening? How can I kill these processes?

    Read the article

  • What differences are there between "home" switches and "professional" switches?

    - by pjreddie
    Our radio station uses a PtP wireless system to stream our radio and TV signals from our studio up a hill to our transmitter. We have been having problems with warbly sound and drop outs that come from some point in this system. An engineer that occasionally visits the station thinks it could be the switches we use on each side of the PtP wireless system to connect the PtP devices to the encoders and decoders and wants us to get two of these switches: http://www.amazon.com/Netgear-JGS516-ProSafe-16-Port-Ethernet/dp/B0002CWPOK/ref=dp_return_1 The encoder/decoder setup only streams 8Mbps total so it seems like the switches we have should not be stressed out, unless they are causing sufficient latency to degrade the performance of the encoder/decoder. At each end of the connection we only have 4 connections, is there any reason we couldn't get a cheaper, "home" quality switch like this: http://www.amazon.com/D-Link-DGS-1005G-5-Port-Gigabit-Desktop/dp/tech-data/B003X7TRWE/ref=de_a_smtd Is there a significant difference that we would notice in terms of latency between these two switches? How much does the quality of the switch actually matter in this scenario? Any help is appreciated, feel free to ask questions if anything needs clarification. Thanks
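
    Before swapping hardware, it may be worth measuring whether the existing switches or the wireless hop are actually adding latency, jitter or loss. A rough sketch run from a machine on the studio side toward the decoder on the transmitter side; 10.0.0.20 is a hypothetical address, and iperf would need to be installed on both ends:

        # Long ping run to look at packet loss and jitter across the PtP link
        ping -c 500 10.0.0.20 | tail -5

        # Throughput test well above the ~8 Mbps the encoder needs ("iperf -s" must be running on the far end)
        iperf -c 10.0.0.20 -t 60 -i 5

    If loss and jitter are clean end to end but the audio still warbles, the switches are unlikely to be the culprit; if they are not, testing with the switches temporarily removed (PtP radio patched straight to the encoder/decoder) narrows it down further.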

    Read the article

  • Repairing Windows 7 boot after Ubuntu 10.10 install

    - by Ted
    I've read various threads on this after googling, including one on this site. I had Windows 7 installed on an SSD. I wanted to try Ubuntu, so I created a partition for it on the SSD and booted with the live CD to install Ubuntu. I went through the install, and somehow Ubuntu carved out another partition on the SSD rather than using the one I had already created. Windows 7 would then not boot, but Ubuntu would.

    I booted with my Windows 7 CD and ran the automatic startup repair; it didn't find any problems. I then ran the bootsect command on the drive with Windows 7. It said it repaired the bootmgr, but Windows still would not boot, and now Ubuntu won't either. I read somewhere else that it may be due to the partition that Windows 7 was on being changed during the install.

    I don't care about the Ubuntu installation, but I don't want to lose the Windows 7 install. Can I delete the Ubuntu partition by booting with the Windows 7 CD? Will that do any good? Thank you all! I'm stumped, even though I've done startup repairs before -- just not after an Ubuntu install.

    Read the article

  • Certain websites redirect to 127.0.0.1. How do I fix this?

    - by Dian
    Facebook and YouTube in particular. Tried nslookup; the address shows as 127.0.0.1. Checked the HOSTS file; it's fine. Ran Malwarebytes' Anti-Malware (didn't find any problems) and Spybot Search & Destroy (found 1 problem). Now (not sure if Spybot made this improvement) pinging youtube shows the correct address (74.125.71.91), but the browser still says:

        Connection to 127.0.0.1 Failed
        The system returned: (111) Connection refused

    Tried ipconfig /flushdns, but there are no changes. Switched to another user, but the results are the same.

    hosts file:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #       127.0.0.1       localhost
        #       ::1             localhost

    ipconfig /all:

        Connection-specific DNS Suffix:
        DNS Servers: 10.1.1.30
                     208.67.220.220
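
    Since the hosts file is clean, the bogus 127.0.0.1 answer is most likely coming from one of the configured resolvers (10.1.1.30 looks like a local router/server and is the first suspect). A quick way to ask each server directly and compare, sketched with nslookup so the same commands work from a Windows or Linux prompt (the lines after the addresses are just annotations):

        nslookup www.facebook.com 10.1.1.30        # answer from the local/router DNS
        nslookup www.facebook.com 208.67.220.220   # answer from OpenDNS
        nslookup www.facebook.com 8.8.8.8          # answer from a third-party resolver for comparison

    If only the 10.1.1.30 answers come back as 127.0.0.1, that server (or the router providing it) is what needs cleaning or replacing in the adapter's DNS settings.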

    Read the article

  • System With Two Network Adapters [closed]

    - by Synetech inc.
    Hi,

    My system has a NIC (Marvell Yukon) built into the motherboard, but I also have a D-Link (Realtek) card. I figure that using the D-Link and disabling the Marvell makes the most sense, though I'm wondering if maybe the built-in one has better throughput (not that my Internet connection is so fast).

    Also, I'm wondering about the merits of using both at the same time. My router has four ports, and I have experimented with enabling and plugging both NICs into the router. I was able to connect to the Internet, but the pattern of usage seemed irregular (which adapter was chosen for a transfer at any given point). I also considered bridging the two, but am having difficulty finding out what exactly creating a network bridge does in the context of the Windows Network Connections window. I am familiar with the concept of connecting networks, so it seems to me that bridging two connections on the same segment is pointless at best (and can cause problems like loops?).

    Does anyone have any tips on what to do if a system has more than one NIC, and any clarification on the bridge option? Thanks a lot.

    Read the article

  • RAID5 issue after replacing motherboard and upgrading firmware

    - by 8steve8
    OK, so I've had a 4x2TB (Samsung HD204UI with the firmware patch) RAID5 array working normally for about a month. It was on an H57 Gigabyte motherboard using the Intel RAID, with Windows 7 x64. Today I got an Intel H67 motherboard, so I upgraded the Intel RAID drivers from 9.6.0.1014 to 10.1.0.1008, and (I'm not sure if I checked after a reboot) it caused no problems.

    I swapped in the new H67 motherboard, and my array status was "failed". Two of the four drives listed themselves as members, while the other two drives listed themselves as non-members. I tried going back to the old H57 mobo and downgrading the RAID drivers, but the issue remains. It's not port dependent: 2 of the drives always come up as non-members regardless of what port or motherboard they are plugged into.

    This screenshot should show that the SNs match, which raises the question of why the software doesn't realize the drive is a member of the array. I'd like to know if anyone has experienced anything similar, and what I should do -- can I force the drive to be recognized as a member (without wiping data)?

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question, but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here.

    I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum or phpBB type of software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions.

    I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images, and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question.

    What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternative to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
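
    For reference, the nginx + gunicorn combination is light enough to fit in a small VPS; a minimal sketch of the moving parts, where the project name "myforum", the paths and the port are all hypothetical:

        # Start the Django app with 3 gunicorn workers on a local port
        gunicorn myforum.wsgi:application --workers 3 --bind 127.0.0.1:8000

        # /etc/nginx/sites-available/myforum -- nginx only proxies to gunicorn and serves static files
        server {
            listen 80;
            server_name example.com;
            location /static/ { alias /home/deploy/myforum/static/; }
            location / {
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    As a very rough sizing note, each gunicorn worker of a small Django app tends to sit in the tens of megabytes, so 256MB is tight but workable provided the database is kept lean or hosted separately.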

    Read the article

  • Uninstall Glassfish and metro completely

    - by user775829
    I thought of updating my GlassFish server from 2.1 to 3.1.1 on a Linux machine. I downloaded the .ZIP package. However, during uninstallation of GlassFish v2.1 I did not find the uninstall.sh file in the "bin" directory. Following are a few steps which I did:

        - I removed the glassfish folder (rm -rf ...).
        - After removing files, at the end it gave me a notification that it could not remove 2 files used by Metro. I can't recollect those file names, but I manually deleted that folder.
        - I made a mistake by not uninstalling Metro first. I uninstalled Metro completely after that, but it seemed pointless (it uninstalled successfully :P).
        - I transferred the GlassFish 3.1.1 ZIP file, unzipped it and configured it.

    Following are a few problems I am facing:

        - I cannot deploy any of my WAR files. It's giving errors saying "Error creating bean, Instantiation of bean failed, etc. etc." (However, the WAR file is getting deployed successfully on another Linux machine.)
        - When I try installing Metro v2.1 separately, it does not show the admin console, or it times out while starting the domain. The log file of the domain says it has started the domain successfully and the process is also created. But after running the command (asadmin) it takes like forever and times out without showing "Domain Started Successfully".
        - There is no uninstall.sh in the GlassFish v3.1.1 bin directory.

    How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? What are the files which I will have to manually remove?
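
    Since the ZIP distribution ships no uninstaller, removing it comes down to stopping the domains and deleting the install directory plus any per-user client state. A sketch, assuming the default install path /opt/glassfish3 and the default domain name domain1 (adjust both to your layout):

        # Stop any running domain, then make sure no GlassFish JVM is left behind
        /opt/glassfish3/bin/asadmin stop-domain domain1
        ps -ef | grep -i glassfish

        # The ZIP install keeps everything, domains included, under the install dir
        rm -rf /opt/glassfish3

        # Per-user client state asadmin may have written (remove if present)
        rm -f  ~/.asadminpass
        rm -rf ~/.gfclient

    The deployment errors themselves ("Error creating bean...") are application-level Spring exceptions, so comparing the domain's server.log with the machine where the WAR deploys cleanly is more likely to reveal the difference than reinstalling again.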

    Read the article

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday one HDD that was used for this particular server died. After installing Debian again, Dovecot decided to no longer listen on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems.

    dovecot -n:

        # 2.1.7: /etc/dovecot/dovecot.conf
        # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3
        auth_mechanisms = plain login
        disable_plaintext_auth = no
        first_valid_uid = 150
        last_valid_uid = 150
        mail_gid = mail
        mail_location = maildir:/var/vmail/%d/%n
        mail_uid = vmail
        namespace inbox {
          inbox = yes
          location =
          prefix =
        }
        passdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        plugin {
          sieve = ~/.dovecot.sieve
          sieve_dir = ~/sieve
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            group = postfix
            mode = 0660
            user = postfix
          }
          unix_listener auth-userdb {
            group = mail
            mode = 0666
            user = vmail
          }
        }
        service imap-login {
          inet_listener imaps {
            port = 993
            ssl = yes
          }
        }
        service pop3-login {
          inet_listener pop3s {
            port = 995
            ssl = yes
          }
        }
        ssl_cert = </etc/ssl/private/mail.crt
        ssl_key = </etc/ssl/private/mail.key
        userdb {
          args = /etc/dovecot/dovecot-sql.conf.ext
          driver = sql
        }
        protocol imap {
          mail_max_userip_connections = 25
        }

    UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for:

        Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled)

    Any ideas?
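
    One thing that stands out is that the dovecot -n output contains no protocols line; on Debian the IMAP/POP3 listeners only exist if the corresponding packages are installed and the protocols setting includes them, and a fresh reinstall after a dead disk can easily miss them. A diagnostic sketch, using the standard Debian package names:

        # Which dovecot packages actually got reinstalled after the disk swap?
        dpkg -l | grep dovecot

        # What does the running configuration think it should serve?
        doveconf protocols

        # If imapd/pop3d are missing, reinstalling them normally restores the listeners
        apt-get install dovecot-imapd dovecot-pop3d

        # Verify afterwards
        netstat -lnp | grep dovecot

    If doveconf reports an empty or reduced protocols list even with the packages installed, adding an explicit "protocols = imap pop3" to the config and restarting Dovecot is the next thing to try.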

    Read the article

  • Eclipse Indigo freezes on 'Open Type' search

    - by NickGreen
    When I try to search for a Java class with Ctrl+Shift+T (the Open Type popup), Eclipse freezes as soon as I type one character. It usually takes about 8 seconds to unfreeze, but sometimes it won't come back at all. When it freezes, I see that the Eclipse process takes about 1 GB of memory and the CPU is at about 100%!

    I've tried creating a new workspace, adjusting the eclipse.ini (perm size, different memory values), starting with -clean, and finally reinstalling the whole IDE. Nothing helps.

    My eclipse.ini:

        -startup
        plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.100.v20110505
        -product
        org.eclipse.epp.package.jee.product
        --launcher.defaultAction
        openFile
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        768m
        --launcher.defaultAction
        openFile
        -vmargs
        -server
        -Dosgi.requiredJavaVersion=1.5
        -Xmn128m
        -Xms1024m
        -Xmx1024m
        -Xss2m
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:+UseParallelGC
        -Djava.library.path=/usr/lib/jni

    I'm using the following plugins: JRebel and m2e. I'm desperate for a solution because this problem costs me a great deal of time.

    System: Ubuntu 12.04 LTS 64-bit, 4GB RAM, Intel Core i7 860 @ 2.8 GHz. Hope somebody knows a solution. Thank you for your time.
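
    When the UI locks up like this, a JVM thread dump taken during the freeze usually shows whether the Open Type indexer is stuck on I/O, GC, or one of the plugins (JRebel/m2e). A sketch using the stock JDK tools; the grep pattern is just a guess at how the process announces itself:

        # Find the Eclipse JVM's PID
        jps -lv | grep -i eclipse

        # While the freeze is happening, capture two thread dumps a few seconds apart
        jstack <pid> > /tmp/eclipse-dump-1.txt
        sleep 5
        jstack <pid> > /tmp/eclipse-dump-2.txt

        # See what the UI and indexer threads are doing
        grep -A 15 '"main"' /tmp/eclipse-dump-1.txt

    A thread that appears in the same stack in both dumps (for example inside the JDT indexer or a JRebel class) is a strong hint about where the hang actually is.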

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi,

    What I've tried: I have two email storage architectures, old and new.

    Old:

        - courier-imapds on several (18+) 1TB-storage servers
        - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
        - the servers don't have replicas; no backups either

    New:

        - dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
        - we store fresh mails on the SSDs and run a doveadm purge to move mails older than a day to the SATA disks
        - there is an identical server which has a max-15-min-old rsync backup from the primary server
        - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
        - the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO
        - scaling out was expected to be done by provisioning another pair of such huge servers
        - on facing disk-crunch issues like in the old architecture, manual moving of email accounts would be done

    Concerns/doubts:

        - I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
        - In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:

        - What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
        - Do traditional software that store on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
        - Does one have to re-write (parts of) these to work with an object storage of some sort, such as OpenStack object storage?
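
    For what it's worth, the 15-minute replication in the "new" setup can be expressed as a very small cron + rsync sketch; the hostname and maildir root are hypothetical, and -H matters because maildir stores can contain hardlinks:

        # /etc/cron.d/mailsync on the primary -- push the mail store to the standby every 15 minutes
        */15 * * * * root rsync -aH --delete /var/vmail/ standby.example.com:/var/vmail/

    The operational caveat is the same one driving the question: an asynchronous copy like this can lose up to 15 minutes of mail on failover, which is exactly the gap a shared namespace (Lustre-style) or per-mailbox replication would be meant to close.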

    Read the article

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + "arrow keys" for 13 years to switch virtual desktops in X windows, I've recently been convinced to change to using the Super keys instead (the Windows key and the context menu key, which I've remapped). This all works fine for the most part.

    However, something is still picking up the key events that these keys send, as if they were normal alphanumeric keys. For example, I first noticed this in a Google Docs spreadsheet: if I press the Windows key alone over top of a cell, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees and starts editing the cell. This caused problems on a collaborative document I was working on, as the way Google Docs works, it led to me accidentally erasing the data in a few fields before I realised what was going on.

    I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that it's a modifier key and doesn't send anything until a second key is pressed.

    My setup is the following:

        - Ubuntu 10.10
        - XFCE 4
        - Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a keypress and keyrelease of my Super_L (left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
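
    Whether Super acts as a modifier depends on whether the Super_L/Super_R keysyms are actually attached to a modifier bucket (normally mod4); a keysym that isn't in any modifier set is delivered like an ordinary key, which matches the Google Docs behaviour above. A quick check and fix sketch (the keycodes follow the .Xmodmap shown above):

        # Show which keysyms are currently assigned to each modifier
        xmodmap -pm

        # If Super_L / Super_R are missing from mod4, attach them
        # (these lines can also be appended to ~/.Xmodmap)
        xmodmap -e 'add mod4 = Super_L Super_R'

    With the keys in mod4, applications receive them as modifier state rather than standalone keypresses, and the XFCE keyboard shortcuts can still bind Super+arrow combinations for desktop switching.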

    Read the article

  • Why is my Toshiba Satellite L675D laptop not connecting to my HD TV using an HDMI cord?

    - by Akasha Eyre
    I'm having a problem connecting my Toshiba Satellite L675 laptop to my Samsung HD TV. I've done it before using an HDMI cord, but now, for some odd reason, it's not connecting at all. Before, my computer screen would go completely black for a second and then come back, and (with it attached to the TV with the HDMI cord and the TV on the proper channel) it would make a sound, come on and work beautifully.

    Now, I try hooking it up and it doesn't do anything. No black screen, no sound. The TV screen just says there's no connection and that I need to check the power source. I'm still checking around, trying to find out if there's a virus on my computer (since my dad and I have the same computer and he can connect his to the TV with no problems), or if I deleted something that had to do with the connection. I've tried turning my computer off and letting it sit for a while, and I've tried checking out other websites, and have come up with nothing.

    Read the article

  • Finding bluetooth link key in Win7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the Bluetooth link key for a paired device in Win7? Is this something that depends on the Bluetooth stack I'm using (Toshiba), or is there a generic place where these are stored in Win7?

    Note: I'm not talking about the six-digit code usually typed by the user during pairing -- that is worthless, since it's discarded after the pairing process. What I mean is the 128-bit link key that the devices exchange during pairing and use thereafter to encrypt all their Bluetooth traffic.

    Background: I dual-boot Win7 / Ubuntu on my laptop, and I would like to have my phone paired to both OSes. Since the dual-booting computer has only one Bluetooth adapter and thus only one Bluetooth address, I cannot do two pairings to the phone: on the second pairing (Windows) the phone just replaces the previous pairing (Linux) for the same Bluetooth address. A thread on the Ubuntu forums pointed me to what I have to do -- pair first on Linux, then on Windows, and then replace the link key on the Linux side with the one Windows negotiated.

    I can find the Linux-side pairing key in /var/lib/bluetooth/[BD_ADDR]/linkkeys -- no problems there. However, on the Windows side I can't find the key. According to the forum post, on the Windows side the key should be in

        SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR]

    but while that registry key does exist, it has no subkeys. (And a similar registry path in ControlSet001 didn't have any subkeys either.)

    One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information in the captured events, not even by exporting the data to a huge XML and grepping it for the BD_ADDRs (with or without colons).

    So how could I find the link key for a paired device in Win7? Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security
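
    If the key can be recovered at all, one route that avoids the Windows tooling entirely is to read the BTHPORT values straight out of the offline SYSTEM hive from Linux and then drop the value into BlueZ's linkkeys file. A heavily hedged sketch: the hivexsh tool comes from the hivex packages, the device and mount point are hypothetical, and the linkkeys line format (remote BD_ADDR, 32 hex digits, key type, PIN length) is an assumption worth verifying against an existing entry:

        # Mount the Windows system partition read-only (device name is hypothetical)
        sudo mount -o ro /dev/sda2 /mnt/win

        # Browse the BTHPORT key in the offline SYSTEM hive; the value names under Keys
        # are the paired devices' BD_ADDRs, and the values are the link keys
        hivexsh /mnt/win/Windows/System32/config/SYSTEM
        # ...inside hivexsh: cd ControlSet002\services\BTHPORT\Parameters\Keys   then: ls / lsval

        # On the Linux side, the negotiated key lives here (one line per remote device)
        sudo cat /var/lib/bluetooth/*/linkkeys

    Note that the registry values under Keys are only readable by SYSTEM from inside Windows, which is why an offline read (or running the registry editor as SYSTEM) tends to be necessary; if the Toshiba stack stores its keys elsewhere, this path may simply stay empty, as observed above.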

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try.

    I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works.

    After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean that we'd have to buy a multi-domain SSL certificate -- but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), and there we also had only one web directory and multiple domains.

    So if this is possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?
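
    For the alias-domain route, the vhost itself is the easy part; a minimal Apache sketch (the DocumentRoot matches the layout above, everything else is illustrative, and in ISPConfig the same effect is achieved by adding the extra names as alias domains of the main site):

        <VirtualHost *:80>
            ServerName  my_site.com
            ServerAlias my_site.net my_site.org
            DocumentRoot /var/www/clients/client1/web1/web
            # all requests now run as the single site user (web1), so PHP writes land on files web1 owns
        </VirtualHost>

    The SSL side is the real constraint: with one vhost serving three names, the certificate presented on port 443 has to cover all of them, which means either SNI with the three existing certificates (one vhost per name again), extra IP addresses, or a single multi-domain (SAN) certificate.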

    Read the article

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing.

    The problem I am having is that sometimes, NTbackup seems to just... blank out. It always completes backup of the C: and E: drives, but then it will start the system state and not add any more messages into the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log. That causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports that it is completing normally in the event log.

    Anyone ever seen a case where system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It's just not seeming to complete or log anything.

    Read the article

  • On Ubuntu get: "-bash: ./flume No such file or directory" BUT flume is there and executable. Same binary OK on RHEL

    - by lcbrevard
    This is already posted on Server Fault and may be more appropriate there; reworked a bit from the original posting.

    We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and SLES 10. It also runs unmodified on SLES 9 64-bit. (SLES 9 32-bit requires a different libstdc++.) The name of the main binary executable is 'flume'.

    Yesterday we tried to put this on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get:

        -bash: ./flume: No such file or directory

    'file flume' shows it to be a 32-bit ELF (can't remember the exact output, and the system is on an isolated network). If put into /usr/local/bin, then 'which flume' returns:

        /usr/local/bin/flume

    The file is marked as executable (did 'chmod +x flume') and lsattr shows no problems with attribute bits. I have not been able to try 'ldd flume' yet. I have also not tried 'strace flume'. Currently I am dealing with an air conditioning failure. [It's been that kind of week!]

    I now suspect that some library is not there. This is a profoundly unhelpful message and one I have never seen before. Is this peculiar to Ubuntu, or perhaps just to this installation? We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.
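
    The classic cause of "No such file or directory" on a binary that plainly exists is that the kernel can't find the ELF interpreter (the 32-bit dynamic loader) named inside the binary, which a 64-bit Ubuntu does not ship by default. A quick check sketch; ia32-libs is the package name Ubuntu 10.x used for its 32-bit compatibility libraries:

        # What loader does the binary ask for?
        file flume
        readelf -l flume | grep "program interpreter"

        # Is that loader present on the 64-bit system? (typically /lib/ld-linux.so.2 for 32-bit binaries)
        ls -l /lib/ld-linux.so.2

        # If it is missing, install the 32-bit compatibility libraries and retry
        sudo apt-get install ia32-libs
        ldd ./flume

    The missing-interpreter case explains why the same binary works on RHEL, which installs the 32-bit runtime alongside the 64-bit one by default.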

    Read the article

  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi as the machine (pre-logon) and as the user (post-logon). We have everything working correctly except for two things:

        1) Sometimes the login scripts don't run.
        2) The user VLAN is sometimes different from the machine VLAN, and no DHCP renew occurs after user logon.

    I am clear that both of these problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials".

    If I enable these GPO settings in a lab, the computer does authenticate and gets WiFi before the user logs on, so when the login box is displayed, it says "Windows will try to connect to ", even though it is already connected (which should be OK?). Enter the user credentials and it goes to a screen saying "Connecting to ", hangs for 10 seconds, and fails with "Unable to connect to , Logging on...". The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we have defeated the point of the WiFi SSO "before user logon". Also by that point, no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN.

    When the "Connecting to " screen comes up, there's no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered, until after the domain logon. Also, with this policy enabled, Windows sometimes hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P

    Read the article

  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic towards understanding whether there's some or no chance of downtime during an upcoming domain transfer for 15 live and very critical domains.

    In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB.

    From what I understand (please correct where wrong!):

        - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the ccTLD registry for the domain (eg. Verisign for .com).
        - CompanyA currently has the ability to change the SOA record because they're the registrar.
        - There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the ccTLD, and which CompanyA are also able to change while acting as registrar.
        - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
        - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
        - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
        - As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.

    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.
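
    A way to see exactly what the registry-level delegation says, independently of any party's hosted DNS (so it keeps answering even if CompanyA drops service), is to query the TLD servers and whois directly. A sketch using the real .com gTLD server name and the placeholder domain from above:

        # What NS delegation does the .com registry itself publish for the domain?
        dig +norecurse NS oneofourdomains.com @a.gtld-servers.net

        # What nameservers does the registrar of record report?
        whois oneofourdomains.com | grep -i "name server"

        # Repeat both after the transfer to CompanyC to confirm the delegation still points at CompanyB

    If both outputs keep showing CompanyB's nameservers before, during and after the transfer, resolution continues uninterrupted regardless of what CompanyA does with its own hosted zones.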

    Read the article

  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro also made one of my USB storage devices inoperable. I lost data on that USB device, but others, such as another USB device and a USB keyboard, are unaffected. I have heard that my friend usually triggers this problem by having at least two devices plugged in -- typically thumb drives/USB flash drives -- and once a second flash drive is plugged in, it becomes unrecognized.

    I have only two USB ports, and at first I thought a port had come loose when I connected two USB devices. But later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS, and before that USB device became unrecognized I had deleted these files; my friend had done the same thing.

    Now I don't want to find out whether the next USB device becomes unrecognized, and I won't delete any hidden system file inside the flash drives again. But I really want to know why these problems happened. Can I delete these hidden files when I only connect the drive to a virtual machine (Vista), since I used to delete all useless hidden files from USB flash drives? Any suggestions or thoughts to prevent this, or alternative suggestions to fix the problem without data loss, would be much appreciated.
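
    If the goal is simply to stop OS X from writing those hidden housekeeping folders onto removable drives (rather than deleting them afterwards), Spotlight indexing can be suppressed per volume. A sketch, assuming the stick mounts as /Volumes/MYUSB:

        # Turn off Spotlight indexing for this volume (prevents .Spotlight-V100 from being rebuilt)
        sudo mdutil -i off /Volumes/MYUSB

        # A marker file commonly used so Spotlight never re-indexes the volume
        touch /Volumes/MYUSB/.metadata_never_index

    Deleting the files from inside a Vista VM changes nothing in itself; OS X recreates them the next time the drive is mounted on the Mac, which is why suppressing their creation (or just ignoring them) is the safer habit.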

    Read the article
