Search Results

Search found 9106 results on 365 pages for 'course'.


  • Setup site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home, and I'm having real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good.

    The point of this is to develop a website, of course, and to make my life easier (irony?) I wanted to point the web site root at my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block, and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the specified folder, or am I going to have to clear another hurdle?

    Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf and made the following changes to httpd-vhosts.conf:

    1. Uncommented the line NameVirtualHost *:80
    2. Added the virtual host shown below.

    I restarted Apache and saw the expected page on http://localhost:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/"
            ServerName localhost
            ErrorLog "logs/dummy-host2.localhost-error.log"
            CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    I then created a new folder named C:\testweb, added an index.html file, and changed the DocumentRoot line shown above. For all intents and purposes I would expect the two configurations to be equivalent, but this setup gives me a 403 error. Even though the C:\testweb folder already had the same permissions as the C:\xampp\htdocs folder, I went further and gave the Everyone group full control of C:\testweb, and got exactly the same problem. So what did I miss?
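
    A possible culprit, sketched here as an assumption based on XAMPP shipping Apache 2.2: in 2.2 the default server config denies access to everything outside htdocs, so a DocumentRoot in a new location needs its own Directory block granting access. The folder name below is the C:\testweb from the question:

        <Directory "C:/testweb">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Placed in httpd.conf or inside the VirtualHost, this would make the two setups equivalent from Apache's point of view; Windows file permissions are only checked after Apache itself allows the request.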

    Read the article

  • 64 bit vs 32 bit

    - by user53864
    When I was doing my MCSA course, I was taught the following:

    - With a 32-bit processor, only a 32-bit operating system can be installed.
    - With a 64-bit processor, both 32-bit and 64-bit operating systems can be installed.

    That is, a 64-bit OS cannot be installed on a 32-bit processor. I just want to make sure of the above points, because recently I was asked to install Windows Server 2008 R2 Enterprise, and during installation it showed only x64 and it simply installed. I was thinking all the computers in my office had 32-bit processors. If so, how could it be possible to install an x64 OS on a 32-bit processor? Either I'm wrong about the first point, or the processor may be 64-bit (I don't know how to check). I'm confused...

    One benefit of 64-bit over 32-bit that I know of is faster operation. If anyone could tell me other benefits, that would be helpful. Thanks!
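
    As a side note, Windows Server 2008 R2 ships only in 64-bit editions, so the fact that it installed at all means that machine's processor is 64-bit capable. To check a processor directly, one sketch (assuming WMI's Win32_Processor reports the processor's native data width independently of the installed OS):

        C:\> wmic cpu get name, datawidth

    A DataWidth of 64 would indicate a 64-bit capable processor even under a 32-bit operating system.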

    Read the article

  • How to remove request blocking on an Apache reverse proxy after a backend failure, before asking the backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But on subsequent requests it seems Apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the Apache error page is shown to the browser, although the backend server is already up. Where is this setting in Apache, what is this behaviour, and how can I set the delay time to zero?

    Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that Apache blocks further requests for a certain time before asking a backend server again that has failed once.

    Edit 2: This is what Apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to
        maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch
        proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
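
    The behaviour described matches mod_proxy's worker error state: after a backend failure, Apache skips that worker for 60 seconds by default. A minimal sketch of disabling the hold-off with the retry parameter (the backend address is a placeholder):

        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

    With retry=0 the proxy retries the backend on every request instead of serving the error page for the remainder of the blocking interval.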

    Read the article

  • Proftpd on Debian ignoring umask setting

    - by sodan
    I have found a solution for my problem. This is what I did: I added the following to my /etc/proftpd/proftpd.conf:

        <Limit SITE_CHMOD>
            DenyAll
        </Limit>

    I had the following problem: when I upload files to my FTP server, the umask I set is totally ignored. All files have permissions 644. I use Debian 5.0.3 as the operating system and proftpd 1.3.1 as the FTP server. The user logging in is called mug; he is a local user (not a virtual user) and is chrooted to the home directory /home/mug/. I tried the following things:

    1. Setting the umask in /etc/proftpd/proftpd.conf:

           Umask 000 000

       This should result in 777 for directories and 666 for files, since the directory umask is applied to 777 and the file umask is applied to 666. After that I of course restarted proftpd to be sure the config was reloaded.

    2. Setting the umask for the user in /home/mug/.bashrc by adding:

           umask 0000

       After that I reloaded the .bashrc (source /home/mug/.bashrc) and also checked the umask setting by switching to the user and running umask:

           su mug
           umask

       As a result I got a umask of 0000 prompted, so this worked. But still all my uploaded files have 644 permissions set :( What am I doing wrong?
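
    The fix makes sense if the FTP client was issuing a SITE CHMOD command after each upload: Umask only applies when a file is created, so a client-side chmod to 644 afterwards would override it, and denying SITE_CHMOD lets the umask stand. A consolidated sketch of the two directives together (values taken from the question; the client behaviour is an assumption):

        # /etc/proftpd/proftpd.conf
        Umask 000 000

        <Limit SITE_CHMOD>
            DenyAll
        </Limit>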

    Read the article

  • SquidGuard and Active Directory: how to deal with multiple groups?

    - by Massimo
    I'm setting up SquidGuard (1.4) to validate users against an Active Directory domain and apply ACLs based on group membership. This is an example of my squidGuard.conf:

        src AD_Group_A {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
        }

        src AD_Group_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }

        dest dest_a {
            domainlist dest_a/domains
            urllist dest_a/urls
            log dest_a.log
        }

        dest dest_b {
            domainlist dest_b/domains
            urllist dest_b/urls
            log dest_b.log
        }

        acl {
            AD_Group_A {
                pass dest_a !dest_b all
                redirect http://some.url
            }
            AD_Group_B {
                pass !dest_a dest_b all
                redirect http://some.url
            }
            default {
                pass !dest_a !dest_b all
                redirect http://some.url
            }
        }

    All works fine if a user is a member of Group_A OR Group_B. But if a user is a member of BOTH groups, only the first source rule is evaluated, and thus only the first ACL is applied. I understand this is due to how source rule matching works in SquidGuard (if one rule matches, evaluation stops there and the related ACL is applied); so I tried this, too:

        src AD_Group_A_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com))
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }

        acl {
            AD_Group_A_B {
                pass dest_a dest_b all
                redirect http://some.url
            }
            [...]
        }

    But this doesn't work either: if a user is a member of either one of those groups, the whole source rule is matched anyway, so he can reach both destinations (which is of course not what I want). The only solution I have found so far is creating a THIRD group in AD and assigning a source rule and an ACL to it; but this setup grows exponentially with more than two or three destination sets. Is there any way to handle this better?
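
    Since multiple ldapusersearch lines in one src block act as an OR, one possible approach, sketched here as an untested assumption, is a single LDAP filter that ANDs both memberOf conditions. Listed before the single-group rules, it would catch users in both groups first (the src name is made up):

        src AD_Group_A_and_B {
            ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
        }

    This still means one src rule per group combination, but it avoids creating extra groups in AD itself.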

    Read the article

  • Variable size encrypted container

    - by Cray
    Is there an application similar to TrueCrypt, but one that can make variable-size containers, as opposed to the fixed-size or only-growing-to-a-certain-amount containers that TrueCrypt can make? I want this container to be mountable as a drive/folder, and the size of the outer container not to be much different from the total size of all the files I put into the mounted folder, while still providing strong encryption.

    To put it in other words, I want a program like TrueCrypt which not only automatically grows the container when I put in new files, but also decreases its size when files are deleted. I know there are some issues, of course, and it would not work 100% like TrueCrypt, because TrueCrypt basically works at the sector level of the disk, giving all the filesystem control to the OS. So when I remove a file, it might as well be left there, or there might be fragmentation issues that would stop simply truncating the volume from working. But perhaps a program can be built some other way? Instead of providing a sector-level interface, it would provide a filesystem-level interface: a filesystem inside a file which would support shrinking when files are deleted?

    Read the article

  • Identifying mail account used in CRAM-MD5 transaction

    - by ManiacZX
    I suppose this is one of those cases where the tool for identifying the problem is also the tool used for taking advantage of it. I have a mail server through which I am seeing spam being sent. It is not an open relay; the messages in question are being sent by someone authenticating to the SMTP server with CRAM-MD5. However, the logs only capture the actual data passed, which has been hashed, so I cannot see which user account is being used. My suspicion is a simple username/password combo, or a user account's password has otherwise been compromised, but I cannot do much about it without knowing which user it is. Of course I can block the IP that is doing it, but that doesn't fix the real problem.

    I have both the CRAM-MD5 Base64 challenge string and the hashed client auth string containing the username, password and challenge string. I am looking for a way to either reverse this (which I haven't been able to find any information on) or, failing that, I suppose I need a dictionary attack tool designed for CRAM-MD5 to run through two lists, one for usernames and one for passwords, with the constant challenge string, until it finds a result matching the authentication string I have logged. Any information on reversing the data I have logged, a tool to identify it, or any alternative methods you have used in this situation would be greatly appreciated.
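
    One detail worth knowing here: per RFC 2195, the CRAM-MD5 client response is base64(username + " " + hex(HMAC-MD5(password, challenge))), so the username is not hashed at all; base64-decoding the logged response reveals the account immediately. Only the password needs a dictionary attack. A minimal sketch in Python (function and file names are made up):

        import base64
        import hashlib
        import hmac

        def identify_account(response_b64):
            # The decoded response is "username SP hexdigest" - the name is plaintext.
            username, digest = base64.b64decode(response_b64).decode().rsplit(" ", 1)
            return username, digest

        def dictionary_attack(challenge_b64, digest, wordlist="words.txt"):
            # Try each candidate password as the HMAC-MD5 key over the raw challenge.
            challenge = base64.b64decode(challenge_b64)
            with open(wordlist, encoding="utf-8", errors="ignore") as fh:
                for line in fh:
                    password = line.strip()
                    if hmac.new(password.encode(), challenge, hashlib.md5).hexdigest() == digest:
                        return password
            return None

    identify_account alone answers the "which user" question; the attack loop is only needed to recover the password itself.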

    Read the article

  • Google Play Music Not Adding MP3s On-Demand

    - by J0e3gan
    My recent attempts to add music on-demand to Google Play Music have yielded nothing: no "Processing music..." or "Added __ of __" messages, just nothing. Previously I could add music on-demand, and nothing has changed on the machine from which I added music successfully before and from which I am trying to add it now. What could be hampering my ability to add music on-demand?

    What I've tried: right after I started using GPM, I briefly found that I could not add music on-demand, but the problem went away after a logout/login. This time a logout/login has not helped. Dragging and dropping, or browsing to folders or files to add, has made no difference either. Nor has waiting ridiculously long for GPM to show signs of life after adding music on-demand. Digging deeper, I read a related Google Play Help article and followed its suggestions:

    - Ran the Google Play Music Manager troubleshooter = no errors or warnings
    - Double-checked my available storage = 8 GB free
    - Double-checked supported file types = MP3 is still supported (of course)

    ...but the problem remains.

    Update: I found that if I configure GPM to automatically upload music added to specific folders, it strangely does add automatically what it will not add on-demand.

    Read the article

  • Emails from web site sometimes blank or gibberish

    - by John Gardeniers
    Our company has one web site with an online store based on osCommerce. The system sends emails for various reasons, such as password changes, order confirmations, etc., using PHP's mail() function. We occasionally have customers report that the email they received is either blank (when the email is in plain text format) or gibberish (when the email is in HTML format). In the latter case it's really just HTML being displayed as raw text, but of course the customers can't read it; the < of the first opening tag, and sometimes a few more characters, has gone missing.

    In an attempt to determine whether this was happening only for certain customers or email systems, I configured the web site to send a CC of each message to a service account at my end. Those CC'd messages always arrive intact and display correctly in Outlook. For what it's worth, it seems to happen a little more frequently to Hotmail users but is certainly not limited to them. As the web site is on a shared (Debian) host, there's precious little I can do about debugging things from that end, although if I made the right request I feel the hosting company's staff would help me, even though they have limited resources to spend on such matters. Any suggestions on what else I might do to try and determine just why those emails are not being received correctly by some customers, yet a CC copy arrives just fine?
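
    One thing worth ruling out, offered as an assumption about the osCommerce setup rather than a confirmed cause: if the HTML messages leave without explicit MIME headers, some receiving systems may mangle or mis-render the body. A minimal sketch of sending HTML mail with the headers spelled out (addresses are placeholders):

        <?php
        $to      = "customer@example.com";
        $subject = "Order confirmation";
        $message = "<html><body><p>Thank you for your order.</p></body></html>";

        // Explicit MIME headers so the receiver knows this is HTML, not plain text.
        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/html; charset=UTF-8\r\n";
        $headers .= "From: store@example.com\r\n";

        mail($to, $subject, $message, $headers);

    Comparing these headers against what an affected customer actually receives (via a full message-source view) would show whether something between the shared host and their mailbox is rewriting the message.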

    Read the article

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain, or on a specific domain only? What I see in the docs is:

        Registry Scripts

        To enable registry scripts add to httpd.conf:

            Alias /perl/ /home/httpd/2.0/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>

        and now assuming that we have the following script:

            #!/usr/bin/perl
            print "Content-type: text/plain\n\n";
            print "mod_perl 2.0 rocks!\n";

        saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:

            % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl

        Of course the path to the script should be readable by the server too. In the real world you probably want to have tighter permissions, but for the purpose of testing that things are working, this is just fine.

    From what I understand of the above, we can run Perl scripts only from the one specific folder named in the directive. So the question again: can we make this directive per-domain, for all domains or for a specific number of domains?
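
    Since Alias and <Location> are ordinary Apache configuration directives, one approach, sketched with made-up domain names and paths, is to scope them inside a <VirtualHost> block so the registry handler applies to that domain only; domains without such a block would not run registry scripts at all:

        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /home/httpd/example.com/htdocs

            Alias /perl/ /home/httpd/example.com/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>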

    Read the article

  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do:

        $ ssh me@remote_host "sudo touch test_sudo"   # just a simple example
        sudo: no tty present and no askpass program specified

    The remote machines are configured in such a way that the password for sudo is never asked for. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh:

        $ ssh -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Let's try to force this allocation with -t -t:

        $ ssh -t -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Nope, it doesn't work. In /etc/sudoers, of course, I have this line:

        #Defaults requiretty

    ...but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix?

    Edit: Here is the sudoers file of a host where ssh me@host "sudo stat ." works. Here is the sudoers file of a host where it doesn't work.

    Edit 2: Running tty on a host where it works:

        $ ssh me@host_ok tty
        not a tty
        $ ssh -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.
        $ ssh -t -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.

    Now on a host where it doesn't work:

        $ ssh me@host_ko tty
        not a tty
        $ ssh -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
        $ ssh -t -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
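
    If requiretty turns out to be active on the failing hosts (it can come from an included file even when the main sudoers line is commented out), one possible per-user exemption, sketched with a hypothetical account name and edited with visudo, is:

        # /etc/sudoers, or a file under /etc/sudoers.d/
        Defaults:me !requiretty

    This lifts the tty requirement for that one account while leaving the policy in place for everyone else.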

    Read the article

  • How to create an MST for silent install using Orca?

    - by Sanarothe
    Hi. I'm trying to deploy 7zip via GPO; I assigned the original MSI, but the package installation simply doesn't take place. What I've gathered is that I need to create an MST. In the spirit of trying to learn as much as possible about it, I've opted to use Orca rather than a third-party automagic tool, but I'm at a loss as to which fields to edit. So far the only change I've made is to give the license-accepted checkbox a value of "1", instead of pointing to another key that, still, just gave it a value of "1."

    So, to give this some structure:

    1. How does creating an MST make the install non-interactive/silent (or what criteria should I consider)? Do you have to manually reconfigure the MSI to simply not perform the GUI aspects? Or do I have to execute the program in silent mode after defining the variables the installer requests? (Though, of course, it seems that would defeat the purpose of the MST.)

    2. How do I determine which fields I need to edit? I've loaded the installer and it takes three inputs: license acceptance, feature set and installation location. I want all of the default values: I'm just trying to deploy it at all, not customize the installation. I BELIEVE that I should be messing with some values in the Registry table, but I really don't know.

    If I'm not asking the right questions, can someone point me to a THOROUGH resource or documentation for this process? I've already gone over the TechNet articles on basic Orca use and deployment, but I couldn't really find anything on creating an MST that didn't involve a third-party program in which one runs a 'dummy' installer to get the before and after snapshots.

    Thank you very much,
    Cameron

    UPDATE: After spending the day troubleshooting, I finally got my server to send out 7zip, but not until I had also assigned Firefox. Not sure why it didn't want to send out 7zip by itself, but I also had some domain naming problems. Thanks for the input (GPResult helped enormously).
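
    For background on question 1, a point worth separating: silence is chosen when the MSI is executed (GPO-assigned packages already install with no UI), while the MST only pre-answers the properties the dialogs would have set. A sketch of testing a transform by hand, with made-up file names:

        msiexec /i 7zip.msi TRANSFORMS=7zip.mst /qn /l*v install.log

    Here /qn suppresses the UI entirely and /l*v writes a verbose log, which is often the quickest way to see why a deployment did nothing.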

    Read the article

  • Restoring open software after a restart event in Windows

    - by Doltknuckle
    I find that at the end of a long day I sometimes have a large number of programs running, all of which I will need to use tomorrow. Normally this isn't an issue: I can simply lock the machine and come back tomorrow. My problem arises when Windows Update launches in the middle of the night and force-restarts my computer, which in turn closes all my open software. I of course save everything regularly so I don't lose anything, but I waste time reopening all of those resources whenever there is a restart.

    [EDIT] I should clarify that I still want to be able to restart my computer when an update comes down. Preventing the restart only delays the problem until later. I should have been more specific in that I want to be able to recover my working environment after a restart for any reason: things like scheduled maintenance, power loss, updates, and software installs.

    [EDIT] I can't simply have them set up to launch at startup, because those files change from week to week. So I need something that monitors what I have open and gives me the option to "recover" those software sessions when I log back in.

    Anyone have any suggestions on what I can do? I'd even be willing to purchase software to do this for me if that is the only option. Thanks

    Read the article

  • Intel Z77 vs H77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz with the LGA1155 socket. That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:

    - Intel H77
    - Intel Z68
    - Intel Z77

    I'm planning to go for the H77, since I don't need any of the new features in the Z77. I don't plan to use a second GPU, and I will never overclock my CPU/GPU. My question is, will H77-based motherboards handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel rather advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me.

    Will I lose any raw CPU/GPU performance or HDD r/w speed with the H77 compared to the Z77? Will heat, etc. be an issue too? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course.

    tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor I'm set on the Ivy Bridge i7-3770.

    Read the article

  • Is it possible to be a professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, right? :P). I have some basic knowledge of the boot process, compiling the Linux kernel from source, and stuff like that. But of course I still have much to learn; sometimes errors appear and, voila, I am lost. I have had Ubuntu, Fedora, openSUSE and Arch, and I'm using Gentoo now.

    I'd like to know what you Linux users, professionals and administrators would consider the best way to learn Linux in a professional way. Is studying for and passing the LPIC test enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS (Linux From Scratch) is a good way of learning about Linux; is that true? I've been thinking about trying LFS to learn more deeply about the Linux boot process and to learn scripting. Is it possible to do it this way? If anyone has a tip or a good way of doing it, maybe someone who did it, any tip is very welcome.

    Words from a person in love with Linux. :D

    The best,
    Marc

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates, so slow that I'm not able to use it anymore. I tried a few things to solve this:

    - changing disks to new disks (configured as RAID 1)
    - changing the PERC card and the PERC cables
    - reinstalling the OS (which I had to do anyway because of the disk change), CentOS 5.5 x64
    - firmware updates to everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled

    OpenManage doesn't alert about anything. I also ran Dell's diagnostic tests and everything passed, and Dell didn't see anything in the DSET log. Dell suggested reseating everything, including the CPU; we did that as well, and the IO rates are still slow.

    I have several PE2950 servers and I have never had such a thing with any of them. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

        [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

        real    0m33.968s
        user    0m0.531s
        sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

        [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

        real    0m7.694s
        user    0m0.053s
        sys     0m4.057s

    Hopefully you will have an idea what could cause the problem.
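
    One classic cause of exactly this write-rate gap on PERC controllers, offered as a guess to rule out rather than a diagnosis: the write-back cache falls back to write-through when the controller battery is degraded or still charging, even with Write Back configured. A sketch of checking with the OpenManage CLI (the controller number is an assumption):

        omreport storage battery controller=0
        omreport storage vdisk controller=0

    Checking the battery state alongside each virtual disk's reported write policy should show whether the cache is actually in effect.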

    Read the article

  • GNOME 2 + Compiz equivalent?

    - by virtualeyes
    Running Fedora 14, and I realize I need to either change distros or find an alternative to GNOME 3 in Fedora 17. Based on what I have read to date, Xfce and KDE are the go-to desktops if I want to avoid GNOME 3. I tried KDE 4 and I wasn't impressed; I like the simplicity of GNOME 2 with Compiz and Emerald. I can't stay on Fedora 14 forever, however, so... where to turn? Basically, I'm looking for these features in my desktop environment:

    - GNOME Do or equivalent.
    - Snap to grid/window tiling: a must-have; the ability to hotkey the focused window to a monitor grid region is a huge productivity win.
    - Zoom window to cursor: in a multi-monitor setup it's sometimes nice to, say, GNOME Do a terminal on one monitor and then hotkey the opened window to the other monitor just by zipping the mouse cursor anywhere on the target monitor (followed, of course, by the snap-to-grid hotkey, all without a single mouse click).
    - Polarization: at night a white background hurts the eyes, so I prefer to hotkey the colors to inverted.
    - Multi-monitor support.

    I'm partial to Fedora, given that I've worked with CentOS for years and have little experience with any other Linux distro; however, if the difference between Fedora and Arch, Mint, etc. is fairly subtle, I'll make the leap. I just need a distro and desktop environment that allows me to be productive with keyboard hotkeys and provides the above basic features. Any suggestions?

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out whether there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI.

    In short, here's what I'm trying to do:

    1. Be able to run a Windows instance (EBS/S3 instance doesn't matter).
    2. Attach local instance storage as drive D:.
    3. Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API.
    4. Be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat.

    One other potential option, which isn't horrible but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start an instance based upon the AMI, it'll create 2 new EBS volumes of a pre-determined size. I'm trying to avoid that scenario if possible.
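
    For the attach step (point 2), ephemeral drives are exposed through block device mappings at launch time. A sketch using the classic ec2-api-tools, where the AMI ID, instance type, and device name are all placeholders, and whether the mapping survives into an AMI created from the running instance is exactly the open question:

        ec2-run-instances ami-12345678 --instance-type m1.large \
            -b "xvdb=ephemeral0"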

    Read the article

  • AMIs in Amazon EC2

    - by Jack of Trades
    I really like the Amazon EC2 environment, and thought I'd spend a bit of time playing around with various types of public (Windows!) AMI servers. But testing has been a bit, well, questionable. Some of my findings:

    - It's very difficult to know what exactly a specific public EC2 image is supposed to be doing. Many images come with little to no information.
    - I can't seem to find the passwords to log onto various Windows images. Why are they public if they can't be used!?
    - Lots of images are based on S3, not EBS-backed. This is very annoying, as S3 takes a lot longer to do pretty much anything (stop, image, etc.). I am only testing images here, so of course I don't question the value of S3 for other purposes.
    - The description of what an image does is almost useless and many times confusing.

    Have others come across these EC2 issues? Again, my interest was just to play around with public images for testing/experimentation/etc., and therefore these issues may not be too relevant for more normal EC2 deployment uses.

    Read the article

  • How to keep Ubuntu 11.10 and Kate editor w/terminal from changing command line when changing tabs?

    - by Kairan
    I am programming in C using the Kate editor in Ubuntu 11.10. It works great, but when I change tabs in Kate, the terminal line changes to the file path of the tab I click on. Normally this is not a big deal (other than annoyingly adding extra text to my terminal); however, if I am currently RUNNING a C program, it obviously types at the command line, which is not so cool. Example terminal window for my C program (at a menu):

        1) select opt 1
        2) select opt 2
        Enter choice: (here it waits for a prompt from the user)

    Now when I click a tab in Kate, it wants to put in the cd path of the file in that tab, such as:

        cd /home/user/os/files

    And of course, since my terminal was waiting for a prompt from the user, it gets that command... not good. Perhaps there is no fix, but maybe someone knows? Obviously I could choose NOT to switch tabs, or end the program before switching tabs...

    Note: I probably made the mistake of putting this under Stack Overflow, which is more of a programming area, so a repost here might be best (I am not sure how to link the questions but will paste a hyperlink to that post; I don't want to violate any Stack Overflow/Super User rules). Suggestions on merging them are welcome, or should I delete one? StackOverFlow Question

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as a non-administrator user

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID 5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network, with read access only. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount it as an administrator user on occasions when I actually need write access.

    I've created a non-administrator user on the Windows Server box (called "ReadOnly") and granted the user read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I throw the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions.

    Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in the Group Policy editor all day to no avail. What am I missing?

    Fake edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.
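
    One thing worth checking besides NTFS permissions, sketched here as a guess: share-level permissions are evaluated independently, and an account granted NTFS read but nothing at the share level sees exactly this symptom (administrators often pass because of a generous share ACL). A hypothetical example of creating a share that explicitly grants the account read access:

        net share Storage=D:\Storage /GRANT:ReadOnly,READ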

    Read the article

  • Does Windows 8 RTM Support VB6 (SP6) Runtime files? If so, which ones?

    - by user51047
    Basically, I'm trying to find out which of the following files come packaged with the Windows 8 RTM (that is, the final version). Just to be clear, we're not asking whether any of the runtime files listed below are or were included with any of the previous versions or releases of Windows 8 (Beta, CTP, RS, etc.); we are interested in this compatibility question only as far as Windows 8 RTM (the final version) is concerned.

    In addition, if possible, we would also like to know which of the below files (if any) come shipped and registered with the Windows 8 RT (on ARM) version. As far as the ARM version is concerned, you're welcome to base your answer on the latest version of Windows 8 RT (on ARM) available at the date and time your answer is posted. (This will also serve to future-proof this question as additional releases or versions of Windows 8 and Windows 8 RT on ARM come out.)

    Here is the list of files (which are basically the VB6 SP6 runtime files):

        File name      Version       Size
        Asycfilt.dll   2.40.4275.1   144 KB (147,728 bytes)
        Comcat.dll     4.71.1460.1   21.7 KB (22,288 bytes)
        Msvbvm60.dll   6.0.97.82     1.32 MB (1,386,496 bytes)
        Oleaut32.dll   2.40.4275.1   584 KB (598,288 bytes)
        Olepro32.dll   5.0.4275.1    160 KB (164,112 bytes)
        Stdole2.tlb    2.40.4275.1   17.5 KB (17,920 bytes)

    Of course, the most important file in there is MSVBVM60.DLL, so if you cannot provide details for all the files relating to both Windows releases, then basing your answer on as many of the files as possible would also be useful. Thank you for reading and for your anticipated assistance in putting this question/answer on record.
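
    For anyone putting the answer on record against a particular build, file presence and version can be queried directly; a PowerShell sketch (the SysWOW64 path assumes a 64-bit install, and whether the file exists there at all is precisely the question):

        PS> (Get-Item C:\Windows\SysWOW64\msvbvm60.dll).VersionInfo.FileVersion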

    Read the article

  • New Windows Server 2008 R2 WIMP running slower than Windows Server 2003

    - by starshine531
    We recently upgraded a WIMP server from Windows Server 2003 (32-bit) to Windows Server 2008 R2 (64-bit). The new server has significantly better hardware than the old server, yet many processes take much longer than on the old box. We have a rather complex web application process that normally takes about 7 seconds on the old box, but on the new one it takes 11-12 seconds. That's down from the 15.5 seconds it took before I disabled IPv6. This process involves some queries (some of them transactions with maybe 3 queries between the start and the commit) and creating and emailing some PDFs. Windows updates are current on a more or less fresh machine. This happens consistently, even when we have almost no traffic on the site and memory and CPU aren't being pressed hard at all.

    The only differences between the servers other than the OS and hardware:

    1. When available, we used 64-bit versions of programs.
    2. The new server uses MySQL 5.5 rather than MySQL 5.1 (I did run the mysql_upgrade program, and we use InnoDB for the engine).
    3. The new server uses PHP 5.3.18 rather than PHP 5.3.1.
    4. With the new OS came IIS7 rather than IIS6, of course.

    What could be causing better hardware to run so much slower? Let me know if you need more details. Thank you.
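
    Given that the slow path involves transactions, one area worth experimenting with, offered as an assumption rather than a diagnosis: by default InnoDB flushes its log to disk at every commit, and name resolution on each new connection can add per-request latency. Two relevant my.ini settings to test:

        [mysqld]
        # Flush the InnoDB log once per second instead of at every commit
        # (trades up to a second of transactions on a crash for faster commits).
        innodb_flush_log_at_trx_commit = 2

        # Skip reverse-DNS lookups on new connections; grants must then use IP addresses.
        skip-name-resolve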

    Read the article

  • Laptop recommendation - Portable Gaming

    - by ivan
    So, I'm looking for a new laptop (http://superuser.com/questions/116869/toshiba-satellite-u500-totally-damaged-lcd). My requirements for a new laptop are:

    - A good keyboard (illuminated) and touchpad, multimedia keys included; it should be better than the Toshiba U500's.
    - A good graphics card, with a system rating of 6.3 and up for gaming graphics (my Toshiba U500 scores 6.3). I used to run some heavy games on my Toshiba U500 with its ATI Mobility Radeon 4570 with 512 MB VRAM, but the framerates are not that nice on high settings.
    - A decent CPU, though I think all the new Core i3, i5 and i7 chips can run most recent resource-intensive games (my Toshiba U500 has a Core 2 Duo T6500, 2.13 GHz).

    I'm also looking for long-term reliability, good sound quality, lots of fast RAM (4 GB DDR3-1066 and up) and a clear-looking LED screen with a decent resolution. I can accommodate a laptop with a screen size of 13 inches up to 15.6 inches, and I don't want it to be heavy because I might be taking it outdoors. I was actually impressed when I saw the HP Pavilion DV6t, but the screen resolution seems a little too small for 15.6 inches. The Pavilion DV3 is also good, but I want to know if there are other options. Looking for some opinions... Thanks. :D

    Read the article
