Search Results

Search found 5057 results on 203 pages for 'force'.

  • Domain Trust 2008 to 2003

    - by nick3216
    I'm having trouble setting up the trust relationship between a Windows Server 2003 AD and a Windows Server 2008 AD. Domain "a" is at the Windows Server 2003 forest functional level. Domain "b" is at the Windows Server 2008 forest functional level. I can set up the incoming side of the trust relationship on domain "a" so that it trusts domain "b". Try as I might on domain "b", I can't set up the outgoing side of the trust relationship to domain "a". The GUI gives an unhelpful 'The request is not supported'. I'm not sure whether netdom is being more or less helpful, as it refers me to FilterSIDs:

        netdom trust /add b /uo:b\admin /po:* /d:a /ud:a\admin /pd:* /oneside:trusting

        To improve the security of this external trust, security identifier (SID)
        filtering is enabled. However, if users have been migrated to the trusted
        domain and their SID histories have been preserved, you may choose to turn
        off this feature. For more information about SID filtering and how to turn
        it off, see the help for netdom trust /FilterSids or see Help and Support.

        The request is not supported.

        The command failed to complete successfully.

    I say 'less helpful' because Windows Server 2008 doesn't support the /FilterSIDs option. How can we force creation of this trust? Edit: Just to clarify, I've checked that [Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options] "Network access: Allow anonymous SID/Name translation" is enabled on both sides of the trust, as per http://social.technet.microsoft.com/Forums/en/winserverDS/thread/cc61fc25-3569-4413-bbfd-92390eb31118

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing a big amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is something between 6 and 10 Mbit = not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Transfers from office to server are actually faster than from server to server! Any idea is appreciated ;) Update: We actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say it is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around. Our direction goes through LEVEL3, and the other way goes through lambdanet (which they said is not a good network). If I got it right (I'm intermediate at networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or just trying to abdicate their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
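
    A hedged sketch of per-hop probing with larger payloads (my addition, not from the post; the host name is a placeholder and mtr must be installed):

        # per-hop loss/latency statistics using ~1400-byte probes
        mtr --report --report-cycles 100 --psize 1400 remote-dc.example.com

        # classic traceroute with a larger packet length (final argument, in bytes)
        traceroute remote-dc.example.com 1400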

  • Is there a faster way to change default apps associated with file types on OS X?

    - by Lri
    Is there anything more convenient than using RCDefaultApp or Magic Launch, or just repeatedly pressing the Change All buttons in Finder's information panels? I thought about writing a shell script that would modify the CFBundleDocumentTypes arrays in Info.plist files. But each app has multiple keys (sometimes an icon) that would need to be changed. lsregister can't be used to make specific modifications to the Launch Services database:

        $ `locate lsregister` -h
        lsregister: [OPTIONS] [ <path>... ]
                    [ -apps <domain>[,domain]... ]
                    [ -libs <domain>[,domain]... ]
                    [ -all  <domain>[,domain]... ]

        Paths are searched for applications to register with the Launch Service database.
        Valid domains are "system", "local", "network" and "user". Domains can also
        be specified using only the first letter.

          -kill     Reset the Launch Services database before doing anything else
          -seed     If database isn't seeded, scan default locations for applications
                    and libraries to register
          -lint     Print information about plist errors while registering bundles
          -convert  Register apps found in older LS database files
          -lazy n   Sleep for n seconds before registering/scanning
          -r        Recursive directory scan, do not recurse into packages or
                    invisible directories
          -R        Recursive directory scan, descending into packages and
                    invisible directories
          -f        force-update registration even if mod date is unchanged
          -u        unregister instead of register
          -v        Display progress information
          -dump     Display full database contents after registration
          -h        Display this help
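
    A scriptable alternative (my suggestion, not from the post) is the third-party duti tool, which writes default-handler settings into the Launch Services database per UTI or extension; a minimal sketch, assuming duti is installed and treating the bundle IDs below as stand-ins:

        # duti -s <bundle-id> <UTI-or-extension> <role>
        duti -s com.macromates.textmate public.plain-text all
        duti -s com.apple.Safari public.html viewer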

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The task was to make the PC as fast as possible, while having increased storage capacity available for normal user data and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files\ (or C:\Program Files (x86)\ for the 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue, but every installation program points its shortcut to my 640GB HDD. To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this question, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but the installation programs sometimes don't let me change this. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
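
    For reference (my addition, not from the post): most installers read their default target from the registry rather than from environment variables. A hedged sketch of changing those values from an elevated prompt follows; the value names are standard, but redirecting Program Files this way is unsupported by Microsoft and can break updates, so treat it as an experiment:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "Z:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "Z:\Program Files (x86)" /f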

  • Known USB 2.0 devices don't install driver, but must be manually forced

    - by Darragh
    When a known USB 2.0 device is plugged in and detected, it doesn't install the driver correctly, but instead shows a Code 28 error and lists the device under "Other Devices" in Device Manager. Viewing the properties of this device shows the following status:

        The drivers for this device are not installed. (Code 28)
        There is no driver selected for the device information set or element.
        To find a driver for this device, click Update Driver.

    When updating the driver manually and selecting the appropriate driver, Windows doesn't believe it's the correct driver, but you can force the installation and it works! The other condition under which the driver will auto-install is when the same USB device is plugged into a USB 3.0 port. Power-related issues have been ruled out, as I have tried via a docking station, a USB hub, etc. Devices tried:

        Jabra headset
        USB mass storage device (flash disk and external HD)
        MS wireless keyboard & mouse
        USB Ethernet controller (USB-MAC controller)

    This is on a laptop that is part of a domain, running Windows 7 Ent 7601; I am logged in as a local administrator. There aren't any Group Policies blocking unsigned drivers or whitelisting devices on the domain. Any suggestions are welcome.
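
    One hedged workaround (my suggestion, not from the post): pre-stage the driver package so Plug and Play can bind it without prompting. pnputil ships with Windows 7; the .inf path below is a placeholder:

        pnputil -i -a C:\Drivers\mydevice.inf
        REM then, in Device Manager: Action > Scan for hardware changes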

  • What are the IR codes the new Apple Remote (alu) uses?

    - by index
    I would like to clone the new Apple Remote (infrared, second generation, aluminium), just for fun, with a microcontroller. Most codes of the previous model can be found in the LIRC remote control database (all except the key combinations menu + << and menu + play, which unpair the remote, change its ID, and pair it). I also don't know which bit encodes the battery status. It uses a modified 32-bit NEC protocol (reverse the LIRC codes bytewise). But the new Apple Remote uses two additional codes for the play and the new select buttons. I don't have a Mac, so I can't brute-force test codes either ;-) So if someone possesses such a remote and the ability to record those two new buttons and three combinations, I'd really appreciate it. If you can't run LIRC (or it gets confused by the new codes) and you don't have an oscilloscope or logic analyser, maybe you could hook up a photodiode to your sound input and record the codes with Audacity? Just hit record, hit each button and combo a few times, hit stop, upload the uncompressed WAV file to a sharing site, done. That'd be great!

  • ACER ASPIRE V3-571G-9435 fan not kicking in, leading to thermal throttling

    - by brythespy
    This laptop has always had this problem. The temperatures kick up to the thermal ceiling of 99C for the CPU (i7-3610QM) and 94C for the GPU (GT 640M). Problem is, the fan doesn't give a damn. It's actually QUIETER when the temperatures are that high than when it's at 60C or so. I figured it was a problem with the BIOS, so I updated that; no change. So maybe it was a problem with Windows? Nope, same result when gaming on Ubuntu. The major problem is that after gaming for ten minutes, the CPU throttles itself to 1197MHz (as opposed to 3193), and the GPU goes down to 135MHz (as opposed to 843MHz). The thing is, the fan won't kick in like I know it can, because when the laptop is in POST, like at BIOS setup, the fan is as loud as a vacuum cleaner! I don't really care about noise, so I'd love to have the fan like that all the time, as long as the temperatures don't fly through the roof. Things I've tried so far, to avoid possible duplicate answers:

        Checked for dust: It's been this way since the laptop was new, and I've since taken it apart. No dust buildup.
        Background stuff running? No; the problem persists across OSes, and it happens while gaming anyway.
        Manually underclocking both CPU/GPU: Using Windows, I can force the CPU to stay at 1.1GHz, but the temperature STILL easily hits 99C after 5 minutes of gaming.
        Contacted Acer support? No help at all. They told me to update and reset the BIOS, which I have done multiple times. There are only about 6 changeable settings anyway, none of which should affect the fan control.
        Third-party fan control program? None detect the fan.

    So, I'm screwed until I can afford to replace this laptop, but I am very satisfied with performance in games... whenever the CPU/GPU aren't being throttled. Anyone who can offer advice to solve this problem would be greatly appreciated. Hell, if you solved my problem I'd send you some monies through PayPal.

  • Gigabit LAN not working on ASUS M2N-MX

    - by chmod
    Today I replaced my Fast Ethernet switch with a newly bought gigabit switch (DGS-1008A). All computers in my house are displaying that the connection speed is 1 Gbps, except for one. The computer that is not working is an ASUS M2N-MX, which contains an onboard gigabit NIC. See the ASUS link for confirmation: http://www.asus.com/Motherboards/AMD_AM2/M2NMX/ Here is some info on the machine:

        OS: Windows 7 Ultimate SP1 64-bit
        BIOS version: 1004 (latest)
        Driver: installed via Windows Update (latest from Windows Update)
        Windows Update: fully updated
        Cable: AMP CAT5E, 5 meters

    The machine was reformatted 3 days ago, so it's pretty clean: no junk, no viruses, etc. In Device Manager, the name of the NIC is "NVIDIA nForce 10/100/1000 Mbps Ethernet". What I have tried: I tried to install the driver provided on the ASUS website, but there isn't one for Windows 7 64 or Vista 64. I tried to install the latest nForce340/6100 driver, downloaded from the NVIDIA website; however, the LAN driver refuses to install, complaining that I already have the best driver installed. I looked in the properties - Advanced tab - Speed/Duplex settings, in an attempt to force it to run at 1000Mbps, but there is no 1000Mbps choice, only 10 and 100Mbps. I changed the CAT5E cable (using one from another computer that runs gigabit without problems). Does anyone have this issue or know how to solve it? Thanks

  • How can I install iTunes in such a way that it can't put any "hooks" or helper programs on my computer?

    - by Joshua Carmody
    I'm buying a new iPad, which means I must once again install iTunes. I've not used iTunes in more than 6 months, since I bought a new computer. I don't like iTunes, but I can live with using it to buy/manage media and sync my Apple devices when the program is open. What I would like to do, though, is find a way to install iTunes such that it has absolutely no effect on my system when it is closed. iTunes normally installs several helper programs, such as iTunesHelper.exe and the Bonjour service. These programs run in the background when iTunes is closed. You can force-close them, or remove them from your startup files, but iTunes will often put them right back when you run it. I know these programs are mostly harmless, but they have at times caused issues, such as iTunes spending system resources trying to catalog media files or drives connected over VPN; at best they're just one more small background process eating up a small piece of my CPU time and RAM. How can I run iTunes without letting it get its "hooks" into my system? One thought I had is that I could create a Windows user account just for iTunes and deny it admin privileges. Then, if I installed iTunes using that account, maybe anything it installed wouldn't affect the "main" account on my PC? But I'm not sure if that would work. Failing that, maybe some kind of virtualization software or sandbox I could install it in? I'm open to any suggestions. My system is an Intel-based PC running Windows 7 Professional 64-bit. Thanks!
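
    A partial measure (my suggestion, not from the post; the exact service names vary between iTunes versions, so verify them in services.msc first) is to stop the helper services and set them to manual start from an elevated prompt:

        sc stop "Bonjour Service"
        sc config "Bonjour Service" start= demand
        sc stop "Apple Mobile Device"
        sc config "Apple Mobile Device" start= demand
        REM iTunesHelper.exe is a Run-key startup entry rather than a service;
        REM it can be disabled with msconfig or Sysinternals Autoruns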

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 machine as a push, such that when I delete them from my Linux box, they remain on the Windows Server. I've found lots of sources that are similar, but most results were from Windows to Linux. I managed to find slightly more similar cases, like using rsync and Cygwin to sync files from a Linux server to a Windows notebook PC, and rsync from a Windows PC to a remote Linux server, with the most similar being a backup from Linux to Windows Server, but through a pull from the Windows Server. Initially, I used Unison because I thought having the 2-way capability would come in handy, and I would just have to set some configurations to make it 1-way. Unfortunately, I couldn't find the right configuration, and only managed to synchronize using the command:

        unison "profile" -ui text -auto -silent

    When I deleted the files on my Linux box, the files on the server got deleted too, which of course isn't what I want. When I looked for options in Unison, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this using rsync and the -a option (archive), which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box:

        rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files

    (a = archive, v = verbose, r = recurse into subdirectories, z = compress, n = dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
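
    Worth noting (my observation, not from the post): the -n flag in that command is the dry-run switch, so even a successful run transfers nothing, and rsync only deletes files on the receiving side when --delete is passed. A sketch of a real push under those assumptions (user and host are placeholders):

        # -a already implies -r; omit --delete so files removed locally stay on the server
        rsync -avz /folder/to/be/backed/up/ \
            user@winserver:/cygdrive/c/place/to/store/backed/up/files/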

  • nginx - Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL. I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/. I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

        server {
            listen 80;
            server_name domain.com mirror.domain.com;
            root /rails_apps/master/public;
            passenger_enabled on;

            # Redirect from www to non-www
            if ($host = 'domain.com' ) {
                rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
            }

            location /assets/ {
                expires 1y;
                rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
            }

            # / -> index.html
            if (-f $document_root/cache/$host$uri/index.html) {
                rewrite (.*) /cache/$host$1/index.html break;
            }

            # /about -> /about.html
            if (-f $document_root/cache/$host$uri.html) {
                rewrite (.*) /cache/$host$1.html break;
            }

            # other files
            if (-f $document_root/cache/$host$uri) {
                rewrite (.*) /cache/$host$1 break;
            }
        }

    How would I modify this to add the trailing slash? I would assume there has to be a check for the slash, so that you don't end up with domain.com/some-random-article//
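
    A minimal sketch of one common approach (my addition, not from the post): redirect only URIs that contain no dot and don't already end in a slash, which sidesteps both static files and the double-slash case. Placed inside the server block, before the cache rewrites:

        rewrite ^([^.]*[^/])$ $1/ permanent;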

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine, the disk is identical to what it was at the time you froze it. This is a bit different from restoring an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via Group Policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • Linux boot - stop the kernel from switching to a new framebuffer mode and clearing output

    - by Avio
    I'm working on an embedded system (based on Ubuntu 12.04 LTS) and I'm customizing its kernel. I'm having some problems with upstart, mountall and plymouth. Nothing unsolvable, I suppose, but the real problem is that I can't diagnose properly what's going on, because the kernel (or maybe plymouth) changes the video mode in the middle of the boot process. This completely wipes entire lines of log output and prevents any debugging of kernel misconfigurations. My Grub2 config seems to be OK with:

        GRUB_CMDLINE_LINUX=""
        GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth"
        GRUB_GFXMODE=1024x768x32
        GRUB_GFXPAYLOAD_LINUX=keep

    Here is some relevant output of lspci:

        00:00.0 Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)

    And here is the relevant portion of my kernel configuration:

        CONFIG_AGP=y
        CONFIG_AGP_INTEL=y
        CONFIG_VGA_ARB=y
        CONFIG_VGA_ARB_MAX_GPUS=16
        CONFIG_DRM=y
        CONFIG_DRM_KMS_HELPER=y
        CONFIG_DRM_I915=y
        CONFIG_DRM_I915_KMS=y
        CONFIG_VIDEO_OUTPUT_CONTROL=y
        CONFIG_FB=y
        CONFIG_FB_BOOT_VESA_SUPPORT=y
        CONFIG_FB_CFB_FILLRECT=y
        CONFIG_FB_CFB_COPYAREA=y
        CONFIG_FB_CFB_IMAGEBLIT=y
        CONFIG_FB_MODE_HELPERS=y
        CONFIG_FB_VESA=y
        CONFIG_BACKLIGHT_LCD_SUPPORT=y
        CONFIG_BACKLIGHT_CLASS_DEVICE=y
        CONFIG_VGA_CONSOLE=y
        CONFIG_VGACON_SOFT_SCROLLBACK=y
        CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=640
        CONFIG_DUMMY_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
        CONFIG_FONT_8x8=y
        CONFIG_FONT_8x16=y
        CONFIG_LOGO=y
        CONFIG_LOGO_LINUX_MONO=y
        CONFIG_LOGO_LINUX_VGA16=y
        CONFIG_LOGO_LINUX_CLUT224=y

    Every other custom/stock kernel boots fine with that Grub2 config. What I would like to have is a single flow of messages on a single console (retaining one screen resolution) from the boot logo till the login prompt. Does anybody know what I have to tweak to achieve this?
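
    If the mid-boot mode change comes from the i915 KMS driver taking over the VESA framebuffer, one hedged experiment (my suggestion, not from the post) is to disable kernel mode setting so the console keeps the mode GRUB hands over:

        # in /etc/default/grub, then run update-grub
        GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth nomodeset i915.modeset=0"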

  • Set primary group of file or directory on Samba share from Windows

    - by Hubert Kario
    Short version: I have this situation on a Samba share:

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka Domain Users 4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka Domain Users    0 Jan 11 17:07 file A

    How can I change this to the following, using only a Windows SMB/CIFS client (using 3rd-party applications is OK)?

        $ ls -lha
        total 12K
        drwxr-xr-x  3 hka Domain Users 4.0K Jan 11 17:07 .
        drwxrwxrwt 19 root root        4.0K Jan 11 17:06 ..
        drwxr-xr-x  2 hka ntpoweruser  4.0K Jan 11 17:07 dir A
        -rw-r--r--  1 hka ntpoweruser     0 Jan 11 17:07 file A

    Rationale and background info: I'm using POSIX ACLs on Samba shares. Together with acl group control for Samba, this allows me to delegate management of permissions to different users based on group membership. The thing is, when I create a new file on a Samba share, I'm unable to set its primary group (the one that grants permission to change its permissions). It's set to my primary group (Domain Users) or to the group set using the force group option in the smb.conf share definition. Removing all groups in Windows except the one I want to become the new primary group doesn't work. I can change it as a regular user through a shell using chgrp group folder/, but that's suboptimal (not all users are *nix users). Trying to set the new owner to a group from the Windows file permissions window makes Samba return permission denied, with the following log entry:

        [2012/01/05 21:13:03.349734, 3] smbd/nttrans.c:1899(call_nt_transact_set_security_desc)
          call_nt_transact_set_security_desc: file = projects/project A/New folder, sent 0x1
        [2012/01/05 21:13:03.349774, 3] smbd/posix_acls.c:1208(unpack_nt_owners)
          unpack_nt_owners: unable to validate owner sid for S-1-5-21-4526631811-884521863-452487935-11025
        [2012/01/05 21:13:03.349804, 3] smbd/error.c:80(error_packet_set)
          error packet at smbd/nttrans.c(1909) cmd=160 (SMBnttrans) NT_STATUS_INVALID_OWNER

    The SID is correct and belongs to the group I specified in the GUI.
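
    A server-side workaround (my suggestion, not from the post): the setgid bit makes files created inside a directory inherit that directory's group, regardless of the creator's primary group, which may remove the need to change the group from Windows at all:

        # run on the Samba host; new files under "dir A" then get group ntpoweruser
        chgrp ntpoweruser "dir A"
        chmod g+s "dir A"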

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users, so that they are automatically logged in to the web application without needing to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet, I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication will be used for the rest of the users. From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user. The server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
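
    As a side note (my addition, not from the post), the encryption types actually present in the keytab can be checked on the server; a sketch assuming MIT Kerberos client tools and a placeholder path:

        # list keytab entries with timestamps and encryption types
        klist -k -t -e /etc/httpd/conf/service.keytab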

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nicolo
    I know the SSL issue has been beaten to death. I'm using DNS redirection to force my clients to use my intercepting proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve here is to allow all HTTPS requests to connect directly to the source server, thus bypassing Squid:

        HTTP connection  -> proxied by Squid
        HTTPS connection -> bypasses Squid and connects directly

    I spent the past few days googling and trying different methods, but none worked so far. I read about SSL tunneling using the CONNECT method, but couldn't find any more information on it. I tried a similar method using RINETD to forward all traffic going through port 443 of my Squid box back to the original IP of www.pandora.com. Unfortunately, I did not realize that all other HTTPS requests would also be forwarded to the IP of www.pandora.com. For example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running in intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name with the proper original IP. Can this be done in Squid or iptables? Lastly, I'm directing traffic to my Squid server using a DNS zone redirect. For example, a client requests www.google.com, my DNS server directs that request to my Squid IP, and then my transparent Squid proxies that request. Will this setup affect what I'm trying to achieve? I tried many methods but couldn't get it to work. Any takes on how to do this?
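
    For context (my addition, not from the post): with DNS-based interception, the original destination IP is gone by the time the packet reaches the proxy box, which is exactly why the RINETD approach can't be made dynamic. If the proxy host were instead inline as the clients' gateway, the usual pattern is to divert only port 80 and let 443 be routed untouched; a sketch, assuming Squid listens on 3128 and eth0 faces the clients:

        # intercept HTTP only
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
        # HTTPS is not diverted; it simply follows normal forwarding/NAT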

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Windows 2003 VM (VMware) on a blade cluster of VMs, from which I'm moving a considerable number of files to our new DFS array. There are two main folders with about 1.7 million and half a million smaller files (letters, memos, and other small files), respectively. Total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that it was taking around 4 hours for the large folder. Now that I'm in the process of actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side, and nothing has changed in the settings of the copy (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to get all the files that have been left out due to being locked by users. I can't take a 20-hour outage to do that, though. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?

  • Use external display from boot on Samsung laptop

    - by OhMrBigshot
    I have a Samsung RV511 laptop, and recently my screen broke. I connected an external screen, and it works fine, but only after Windows starts. I want to be able to use the external screen right from boot, in order to set the BIOS to boot from DVD, and then to install a different OS and also format the hard drive. Right now I can only use the screen once Windows loads. What I've tried: opening up the laptop and disconnecting the display, to make it find only the external screen and use VGA as the default -- didn't work. Using the Fn+key combo in the BIOS to connect the external display -- nothing. Looking for ways to change the boot sequence without entering the BIOS -- it doesn't look like that's possible. Possible solutions? A way to change the boot sequence without entering the BIOS? Someone with the same brand/similar model to help me blindly keystroke the correct arrows/F5/F6 buttons while in BIOS mode to change the boot sequence? A way to force the external display to work from boot, through modifying the internal connections (I have no problem taking the laptop apart if needed; please no soldering though), through the BIOS, or through a program? Also, if I change the boot sequence without access to an external screen, would the Ubuntu 12.1 installation sequence attempt to use the external screen, or would I only be able to use it after Linux is installed and running? I'd really appreciate help; I can't afford to fix the screen for a few months, and I'd really like to get my computer back to decent performance! Thanks in advance!

  • Terratec Cinergy Hybrid T USB XS is not recognized anymore on Mac OS X

    - by Gabble
    I have used the Terratec Cinergy Hybrid T USB XS for years now, alongside the Elgato EyeTV software. I am a happy and completely satisfied user! A couple of weeks ago, the USB stick stopped working on my MacPro1,1 (OS version 10.6.3): EyeTV does not see any device attached, and the green LED on the stick stays off. It is not a USB port fault: I have unplugged every other USB/FireWire device and tried different USB ports, to no avail (all other USB devices work as expected on any port). I have completely uninstalled the EyeTV software, including preferences and system daemons/extensions, rebooted, and reinstalled the latest EyeTV. No way. Reset the PRAM. Nope. Checked Apple System Profiler - USB: no device attached; the MacPro does not see it at all. I need to say that: a) the device worked like a charm even with the latest OS 10.6.x (so an OS upgrade is not the cause); b) I have plugged the Terratec Cinergy Hybrid T USB XS into my MacBook5,1, where EyeTV is not installed and never was: the green LED on the stick turns on, the Growl bubble pops up, and the device is perfectly recognized by the system. Apple System Profiler says (translated from Italian):

        Cinergy Hybrid T USB XS (2882):
          Product ID:             0x005e
          Vendor ID:              0x0ccd
          Version:                1.10
          Serial number:          061102005755
          Speed:                  Up to 12 Mb/sec
          Manufacturer:           TerraTec Electronic GmbH
          Location ID:            0x04100000
          Current available (mA): 500
          Current required (mA):  500

    At this point I am pretty sure the Terratec stick is not damaged and there is something wrong with my MacPro. I kindly ask you: is there a way to force my MacPro to recognize the USB device? What can I check? Is there something that caches USB connections that can be reset? An OS reinstall would be the very last resort for me. Thanks in advance for any help you will offer!

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when both monitors are at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz, so it doesn't seem like a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only these kinds of full-screen glitches. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance!

        ASUS P8Z77-M
        Intel Core i5-3570K (with Intel HD 4000 Graphics)
        2x4 GB AMD Performance Edition memory
        Corsair Force 3 Series Rev. B 120GB SSD
        Maxtor 200GB HD
        Lite-On DVD-RW
        Antec 350 Watt PSU
        64-bit Windows 7 Professional
        2x Iiyama ProLite E2208HDS displays

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (the last time I tried it, it was SuSE 7!). Finally I felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions. 1) How do I ensure that my packages are up to date? It sounds silly, but I tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did a sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and should just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right? 2) When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! It always worked in Ubuntu, and I feel very uneasy without it. Is this how SuSE is supposed to be? 3) When I try to install something and start typing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening? There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
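
    On the tab-completion questions, a hedged guess (my addition, not from the post): zypper's argument and package-name completion is provided by the bash-completion package, which may simply not be installed; a sketch:

        sudo zypper install bash-completion
        # then start a new login shell, or source it into the current one
        source /etc/profile.d/bash_completion.sh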

  • How To Boot with "mem=1024m" Argument using GRUB - Ubuntu 10.04

    - by nicorellius
    I am still working on this question. This new one is a different question, so I thought it would be good to post it separately. Is this the proper protocol, or should I have just edited the other question? I'm running Ubuntu 10.04 with kernel 2.6.32-22-generic on a Toshiba Satellite laptop. When I enter the GRUB menu (I have Ubuntu 9.10 installed as well), I can choose which kernel to boot. I scroll down to the one I want and press "e", and I expect to be able to enter mem=1024m and force the kernel to use this much memory. But when I run cat /proc/meminfo, or look in the process manager after booting with this argument, I still see all the RAM: ~2 GB. Am I using this boot argument incorrectly? The boot configuration (before I add anything) looks like this:

        insmod ext2
        set root=(hd0,1)
        search --no-floppy --fs-uuid --set 10270f21-1c42-494b-bd3f-813c23f6d518
        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-1c42-494b-bd3f-813c23f6d518 ro quiet splash
        initrd /boot/initrd.img-2.6.32-22-generic

    The way I did this was that I added mem=1024m after the last line and pressed Ctrl+x (save and boot the kernel), and the system booted. I tried adding mem=1024m to the end and the beginning of this list, and it appeared not to change the RAM allocation.
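
    A likely explanation (my reading, not confirmed by the post): kernel parameters only take effect when appended to the linux line itself; a mem=1024m placed on its own line, or after the initrd line, is ignored. The edited entry would look like this:

        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-1c42-494b-bd3f-813c23f6d518 ro quiet splash mem=1024m
        initrd /boot/initrd.img-2.6.32-22-generic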

  • Not able to connect to a port other than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients. A computer with Arch Linux hosts the OpenVPN server; it also hosts a virtual machine running CentOS, which is likewise connected to the OpenVPN subnet. A Windows 8 machine hosts a virtual machine with CentOS; both of them are connected to OpenVPN. The last machine is a virtual machine with CentOS hosted by a computer with Ubuntu 14 (which is not connected to OpenVPN). All physical computers are on different networks. The problem is that when I use nmap to scan the Windows machine and its guest virtual machine, it says the host seems down. When I force nmap to scan a specific port, it shows a filtered state:

        nmap -Pn -p 50010 n3

        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.11s latency).
        rDNS record for 10.8.0.3: node3.com

        PORT      STATE    SERVICE
        50010/tcp filtered unknown

    Telnet also cannot connect to this port:

        telnet n3 50010
        Trying 10.8.0.3...
        telnet: Unable to connect to remote host: No route to host

    But ss on that host shows the proper state of this port:

        ss -anp | grep 50010
        LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271))

    What might be the reason for that, and how can I fix it? EDIT: I've found that I am able to connect via telnet to the ssh port:

        telnet n3 22
        Trying 10.8.0.3...
        Connected to n3.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.3

    So it seems that it's not a problem with the Windows firewall. But I have no idea what it might be. Also, here is the nmap result for the first thousand ports:

        nmap -Pn -p 1-1000 n3

        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.49s latency).
        rDNS record for 10.8.0.3: node3.com
        Not shown: 999 filtered ports
        PORT   STATE SERVICE
        22/tcp open  ssh

        Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
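
    Since port 22 connects but 50010 shows as filtered, a first hedged check (my suggestion, not from the post) is the packet filter on the CentOS VM itself:

        # on the VM: list INPUT rules with counters and look for 50010/tcp
        sudo iptables -nvL INPUT

        # temporary test rule (not persistent across reboots)
        sudo iptables -I INPUT -p tcp --dport 50010 -j ACCEPT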

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php, being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess:

        RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now, if I reference the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example:

        http://www.mysite.com/cms/?q=node/1

    should be redirected to:

        http://www.mysite.com/?q=node/1

    How can I make this happen? EDIT: Also, in the .htaccess file supplied with Drupal (the CMS), this is written. I've tried enabling it, but it doesn't seem to have any effect:

        # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
        # VirtualDocumentRoot and the rewrite rules are not working properly.
        # For example if your site is at http://example.com/drupal uncomment and
        # modify the following line:
        # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant:

        # Block access to "hidden" directories whose names begin with a period.
        RewriteRule "(^|/)\." - [F]

        # Strip cms folder from url
        RewriteRule (.*) cms/$1

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
            # Serve gzip compressed CSS files if they exist and the client accepts gzip.
            RewriteCond %{HTTP:Accept-encoding} gzip
            RewriteCond %{REQUEST_FILENAME}\.gz -s
            RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

            # Serve gzip compressed JS files if they exist and the client accepts gzip.
            RewriteCond %{HTTP:Accept-encoding} gzip
            RewriteCond %{REQUEST_FILENAME}\.gz -s
            RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

            # Serve correct content types, and prevent mod_deflate double gzip.
            RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
            RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

            <FilesMatch "(\.js\.gz|\.css\.gz)$">
                # Serve correct encoding type.
                Header append Content-Encoding gzip
                # Force proxies to cache gzipped & non-gzipped css/js files separately.
                Header append Vary Accept-Encoding
            </FilesMatch>
        </IfModule>
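
    A hedged sketch of the missing piece (my addition, not from the post): match the externally requested path via THE_REQUEST so the redirect can't loop with the internal cms/ rewrite, and place it in public_html/.htaccess before that rewrite:

        # 301 any URL that was *requested* as /cms/... to the bare form
        RewriteCond %{THE_REQUEST} \s/cms/
        RewriteRule ^cms/(.*)$ /$1 [R=301,L]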

  • Apache Redirect is redirecting all HTTP instead of just one subdomain

    - by David Kaczynski
    All HTTP requests, such as http://example.com, are getting redirected to https://redmine.example.com, but I only want http://redmine.example.com to be redirected. I have the following in my 000-default configuration:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            Redirect permanent / https://redmine.example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Here is my default-ssl configuration:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public

            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

            <Directory /usr/share/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            LogLevel info
            ErrorLog /var/log/apache2/redmine-error.log
            CustomLog /var/log/apache2/redmine-access.log combined
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Is there anything here that is causing all HTTP requests to be redirected to https://redmine.example.com?
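
    For what it's worth (my reading, not confirmed): with name-based virtual hosts, Apache sends any request whose Host header matches no ServerName to the first vhost listed, so every other hostname lands in the redmine vhost and gets redirected. A sketch of the usual fix, assuming NameVirtualHost *:80 is in effect:

        # list the catch-all vhost first so it becomes the default...
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
        </VirtualHost>

        # ...and keep the redirect only in the redmine vhost
        <VirtualHost *:80>
            ServerName redmine.example.com
            Redirect permanent / https://redmine.example.com/
        </VirtualHost>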
