Search Results

Search found 14764 results on 591 pages for 'interview questions'.

Page 450/591

  • Terminal Server CPU usage at 100%

    - by Light1c3
    I'm running a terminal server with around 50-60 users, and every so often the server will go from 40% usage to 100%. I took a closer look, and it seems every time this happens, a different user or two seem to be caught in a loop and end up using < 30%, where the rest of the users only use a maximum of 5%.

    The company behind the software we use claims it's due to the server's inadequate hardware (it's a VM running on a dual quad-core setup), which to me sounds like BS! I'm fairly new to this level of IT, so if I misspoke I apologize. I have no way to prove it, but I believe adding more raw hardware power won't do me any good, as this seems to me like a bug in their software, and it will suck up as much (or as little) CPU as it's given.

    The VM in question has 4 vCPU cores and 12 GB RAM available, and is running Windows Server 2008, 64-bit. Thanks in advance for your help!

    Note: I have the same question posted on SO, but was pointed in this direction, so just in case, here is a link to the post: http://stackoverflow.com/questions/17276602/termserver-cpu-at-100

    Read the article

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:

        HOST: Windows 7 Ultimate 32-bit
        GUEST: CentOS 6.3 i386
        Virtualization soft: Oracle VirtualBox 4.1.22
        Networking: NAT -> (PORT FORWARD: HOST:8080 => GUEST:80)
        Shared Folder: centos

    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is:

        Forbidden
        You don't have permission to access / on this server.
        Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080

    A sample virtual host conf file:

        <VirtualHost *:80>
            # General
            DocumentRoot /media/sf_centos/path/to/public_html
            ServerAdmin webmaster@$domain
            ServerName www.$domain
            ServerAlias $domain *.$domain

            # Logging
            ErrorLog /var/log/httpd/$domain-error.log
            CustomLog /var/log/httpd/$domain-access.log combined

            # mod_rewrite
            RewriteEngine On
            RewriteLog /var/log/httpd/$domain-rewrite.log
            RewriteLogLevel 0
        </VirtualHost>

    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos:

        drwxrwx--- root vboxsf

    The vboxsf group includes apache and root.

    So these are my questions:

    1. How do I solve the Forbidden problem?
    2. How do I set up both host and guest firewalls?
    3. How can I improve this development environment to simulate the production environment as much as possible, especially security improvements?
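
    A few things worth checking from the guest, sketched below. This assumes the stock CentOS 6 SELinux and vboxsf setup; the paths are the ones from the question.

        # Confirm apache really is in the vboxsf group (group changes need an httpd restart)
        id apache

        # Apache needs execute (+x) on every directory leading down to DocumentRoot
        namei -m /media/sf_centos/path/to/public_html

        # See whether SELinux is the blocker: if the 403 disappears in permissive
        # mode, the shared folder's security context is the problem
        getenforce
        setenforce 0    # temporarily, for testing only
        tail /var/log/audit/audit.log

    SELinux is a common cause of 403s on CentOS when the DocumentRoot lives outside /var/www, since vboxsf mounts don't carry the httpd_sys_content_t context.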

    Read the article

  • Excel controls not visible for certain users

    - by Nossidge
    One of the users of an Excel program I've written is having a weird problem: none of the control objects (Command Button, ComboBox, etc.) are visible to him when he opens the file on his laptop. He is using Excel 2003, the same version I used to create the program, and enables macros using the pop-up when the file loads. I have Googled this, and have found several people who seem to be having the exact same problem, with various versions of Excel. Unfortunately, none of their questions were answered.

    I can't really explain it any better than this user:

        If I enter design mode and pull a control from the control toolbar onto a sheet, all I see are the drag handles. When not in design mode I have to feel around with the mouse, and can click the button, which executes the button click code correctly and opens another sheet where again I have to feel around for the buttons to return me to the original sheet. The button I managed to click is now visible, but as soon as I click anywhere on the sheet it disappears. I have verified that the Visible property of the buttons is set and that "Show All Objects" on the Options View tab is selected. If I pull buttons from the Forms toolbar onto a sheet, they are visible. If I try to find objects using F5 when not in design mode, Excel reports no objects on the sheet.

    So, Super Users, can you help?

    UPDATE: Thanks for your replies, but much like the person in the ozgrid link, the problem has gone away. Not sure why it went, but I can confirm that the user rebooted again and also started up other Excel files that didn't contain controls in the interim. Perhaps that fixed it, or maybe it'll be back again. I'll keep updating with progress, and close if the problem doesn't reoccur for the next few days. Thanks again.

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Win 2k3 R2 Standard 64-bit edition server. On this server we're running a file server and the ability to allow remote login to our network through VPN. We do not currently utilize a domain setup; all user accounts are local accounts on the server, and each employee is given a unique account to log in with.

    The password is a randomly generated, 16-character-long string, which makes it hard to remember, so what we've done is basically have the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate.

    Do you guys have any idea why this is? Is there a setting to prevent this? I've checked the logs but I couldn't find anything of interest. Then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have.

    Edit: When I say "login" or "authenticate" I mean through the standard Windows SMB protocol.

    Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

    Read the article

  • Installing Windows 8 on another computer with an OEM product key

    - by user180671
    These questions came up after reading the article about the new method Microsoft uses to license Windows 8 computers. Let's say I bought a brand new laptop with Windows 8 preloaded. Unlike the old way, there is no OEM sticker on the back of the computer which can be used for reloading the system (the new product key is stored in the BIOS, as mentioned in the article; the key can be pulled out using software anyway). Is it possible to install Windows 8 on another computer with that particular key, in case the original computer is totally damaged?

    Here is what I tried: First, I extracted the key with a tool called "Windows 8 Key Viewer", then used the Windows 8 upgrade tool to determine what copy of Windows 8 I should download for the installation. The tool correctly recognized the key as a legitimate one, but it claims that the key cannot be installed with retail media (since it is an OEM key).

    Does this mean the only way to do it is to use an OEM CD from the manufacturer? Will an ISO from an MSDN source do? Or is it just not possible?

    Read the article

  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release).

    EDIT: Of course I would always prefer the installation via yum, but somehow I failed to get that specific package installed using the normal approach. For that case, the EPEL FAQ recommends Version 2 below.

    As I'm downloading the package through an insecure channel (HTTP), I wanted to make sure that the integrity of the file is verified using information that is not provided with the downloaded file itself. Is that actually true for all of these approaches? I've seen various approaches to this on the internet:

    Version 1

        rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2

        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3

        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps before the actual installation that are done by rpm -K.

    So my main questions at this point are:

    1. Are my guesses correct, or am I missing something?
    2. Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all?
    3. Are the additional checks performed by rpm -K ... at all relevant?
    4. What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
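
    For what it's worth, a small sketch of how the key import changes what rpm -K reports (using the URLs from the question; the output lines are illustrative of CentOS 6):

        wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

        # Before importing the key, the signature cannot be verified
        rpm -K epel-release-6-7.noarch.rpm
        # -> epel-release-6-7.noarch.rpm: ... (MISSING KEYS: ...) NOT OK

        # Import the Fedora/EPEL signing key over HTTPS, then check again
        rpm --import https://fedoraproject.org/static/0608B895.txt
        rpm -K epel-release-6-7.noarch.rpm
        # -> epel-release-6-7.noarch.rpm: ... md5 gpg OK

    Since the key itself is fetched over HTTPS while the package travels over plain HTTP, the signature check is exactly the out-of-band verification being asked about.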

    Read the article

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop a classified-ads website like http://www.gumtree.com/ (it will not be exactly like that one, but just to give you an idea), and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application.

    I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability on multiple servers) + jQuery, plus a CDN for static content, a server for images, and a scalable server for the requests and the rest.

    My questions are:

    - What do you think about my approach?
    - What solutions would you recommend in terms of server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have much experience with the server side (to determine what would be the best solution for my scalable application).

    Read the article

  • Why does my ftp(e)s server fail about half of the time

    - by user1092608
    We are having this discussion at work regarding our FTP server running via vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward solution for our server to have secure file transmission. Since then, our FTP server has become a source of issues for our end users: half of the time, users complain about FTP connections not working.

    I must say, I tested our FTP through different infrastructures (in the field, at random times, at random places) and indeed, behind some configurations (no idea how they are configured, because of the "field" testing), I receive errors. One of them is:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to be running fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some are starting to suggest that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny this.

    So, as I would like to avoid reconfiguring, since this involves messing around in our SSH service, our virtual user setup and our FTP service, I would need some advice on:

    1. What could potentially be the general cause?
    2. Do you have some general tips?
    3. Would you mind having a look at my configuration file?

        # ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks
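
    Two quick checks that can be run from an affected network, sketched below (hostname illustrative). FTPES failures "in the field" are very often the passive port range being blocked somewhere between client and server, so testing the data ports separately from the control channel narrows it down.

        # Test the explicit-TLS handshake on the control channel
        openssl s_client -connect ftp.example.com:21 -starttls ftp

        # Verify the passive data ports (pasv_min_port..pasv_max_port) are reachable
        nmap -p 30000-30030 ftp.example.com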

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned about finding files matching a search term, not ALL files containing the search term.

    1. Is there a way to search just for files? When I use the search, it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the file itself.

    2. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the search, it returns instances as if I were using a wildcard, file1*.txt.

    I miss XP!!!!

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation -- use case:

        Bob, with UID 1001, shall connect on port 31001.
        Alice, with UID 1002, shall connect on port 31002.
        John, with UID 1003, shall connect on port 31003.

    (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP -- in /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the SFTP server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH -- a few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits: now it should be easy to monitor per-user bandwidth usage, and an rrdtool-based application could then produce per-user usage charts. I know this won't be a perfect calculation of the bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem to me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions:

    1. Is the idea of assigning one port to each user a good one?
    2. Is the proposed setup a reliable one?
    3. If I have to open dozens of ports for many users, should I expect a performance drawback?
    4. Do you know an rrdtool-based application which could produce such a chart?
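
    As a side note, the per-port byte counts themselves can come straight from iptables, since every rule keeps packet and byte counters even without a jump target. A minimal sketch under the port scheme above:

        # One counting rule per user port, in each direction
        for port in 31001 31002 31003; do
            iptables -A INPUT  -p tcp --dport $port
            iptables -A OUTPUT -p tcp --sport $port
        done

        # Read the exact byte counters (e.g. from a cron job feeding rrdtool)
        iptables -L INPUT  -v -n -x
        iptables -L OUTPUT -v -n -x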

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then, I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then, I proceeded to install the Android SDK Starter Package: installer_r08-windows.exe.

    But... upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the Installed JREs in Eclipse, as suggested below. Neither solved the problem. It appears that I am not the only one experiencing the problem, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit -- I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), not before having to check 'Contact all update sites during install to find required software'. We'll see how this goes...

    Update (3): It worked! (going to accept @bubu's answer shortly) -- but why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?

    Read the article

  • Windows 7 PC freezes for an indeterminate amount of time after unlocking

    - by pikes
    Not sure if this type of question is appropriate for this forum, but I've tried everything I can think of to solve this problem aside from a format/reinstall.

    I recently got a new work PC (Dell OptiPlex 755) with Windows 7 Professional x64, with the standard developer software installed for .NET development: VS2008, VS2005, SQL Management Studio, Office 2007, etc. Recently I've been having this weird problem where, after I lock my PC, the screen stays black for a while after unlocking. I can press Ctrl+Alt+Del and put my password in, but then it just goes black. The amount of time on the black screen seems to be related to the amount of time I am away from my PC: if I'm only away a few minutes, it'll take about a minute to get to the desktop; if I'm away for an hour, it can take up to 15 minutes; if I lock it and go home for the night, I have to restart my PC in the morning (I've let it sit for an hour after a night of being locked and nothing happened). It doesn't do it every time, but definitely the majority of the time.

    One weird thing I've seen is that if I remote into my machine before trying to log back in, it does not do it. I uninstalled all software back to the point when I remember it started happening and it still does it. I was using this PC for a few weeks without this problem happening at all.

    Anyone know what my next troubleshooting steps could be? My IT department tried to fix it by moving my old profile to another disk and having me log in, effectively recreating a profile from scratch, but that didn't solve it. As I said above, if this isn't the right forum for these types of questions, please let me know. Thanks in advance!

    Read the article

  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to programmatically send an email from a website of mine, using the PHP PEAR Mail package with an SSL connection, PEAR::Mail replies with the following:

        Failed to connect to example.blabla.net:PORT
        [SMTP: Failed to connect socket: connection timed out (code: -1, response: )]

    I looked for similar questions on SO and SF; all the answers ask the OP to test the connection via telnet or ssh on the command line. So that is what I did, and here is what happens:

        $ ssh -l myusername -p PORT example.blablabla.net
        _

    Here, '_' in the second line means that NOTHING happens -- indefinitely, which seems coherent with the timeout message I got from PEAR::Mail. So PEAR::Mail seems out of cause.

    But, what I have to tell you is that yesterday, it just worked. The connection was properly established, mails were properly sent, etc. Just today it doesn't work anymore and I absolutely don't know why. I restarted Apache (in case an extension was broken), restarted mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't anymore), I didn't touch the server and did nothing on it, simply because I took a day off to write some blog posts!

    Has anyone encountered a similar problem? The problem seems quite common, judging after some googling, but the solution doesn't. Thanks for any help!

    (Note on config: CentOS 6.4 x86_64 with cPanel/WHM.)
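
    Since ssh only proves the TCP connection hangs, a more direct test is to speak SMTP-over-TLS to the port itself. A quick sketch, reusing the placeholders from the question:

        # Implicit SSL (typically port 465)
        openssl s_client -connect example.blabla.net:465

        # STARTTLS on the submission port (typically 587) or plain SMTP (25)
        openssl s_client -connect example.blabla.net:587 -starttls smtp

    If these also hang, the timeout is a network or firewall issue rather than anything in PEAR::Mail -- cPanel/WHM boxes often ship with outbound SMTP restrictions enabled, and some providers block port 25.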

    Read the article

  • How to get Subversion repository from svn:// and https://?

    - by Hikari
    I know these are noob questions, but I never got my own Subversion server running before and I'm kinda lost.

    I installed VisualSVN on Windows, but it doesn't support the svn:// protocol by default, only HTTP or HTTPS. It is working fine over HTTP: I'm able to manage it from its management tool, see its repositories and get their HTTP-based URLs, and from those I'm able to use Tortoise to check out and check in. I'm able to check out from a repository URL using Tortoise:

        http://Main:90/svn/HikariKrumo/

    But I need the svn:// protocol for Redmine to access it. Redmine says it supports http://, but it reports this error message:

        The entry or revision was not found in the repository.

    And I need HTTPS to access it from the Internet. If I can get Redmine to access it via svn://, I can just configure it to use HTTPS in place of HTTP, and I hope it all works.

    I like VisualSVN because of its management tool, but I can use another Subversion distro if needed, as long as it supports svn:// and https://. I'm going crazy over this, because it should be simple but I can't get it to work.
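
    For the svn:// side specifically: that protocol is served by a separate daemon, svnserve, which can be pointed at the same repository directory. A minimal sketch -- the repository root is an assumption, since VisualSVN keeps its repositories wherever it was configured to:

        # Serve all repositories under the given root on the default port 3690
        svnserve -d -r "C:/Repositories"

        # A client (or Redmine) would then use
        svn checkout svn://Main/HikariKrumo/

    Access control for svnserve lives in each repository's conf/svnserve.conf file, separate from the HTTP(S) authentication that VisualSVN manages.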

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a virtually hosted Linux server, and in the last few months, every now and then the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot (df output where "Size" should be 400GB).

    This has happened twice before, and again today. The last times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail explaining that I can use "df -h" to view available disk space (well duh, how do you think I noticed this particular issue?). Questions about whether and what they had done were ignored.

    It has happened around the 25th to 28th of the month, so I suspect that they might have a cron job running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change HDD size in a virtual machine.

    a) What could be the cause of shrinking HDD size in an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine, or is it more likely to be an issue with the VM host?
    b) Can I do anything about it without needing to contact the host's support?
    c) Is there any way I can prevent this from happening at all?
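
    One way to build a case for the host: log what the block layer and the filesystem each report, so the exact moment of the shrink is on record. A rough sketch (log path illustrative):

        # Compare the device size with the filesystem size
        lsblk -b
        df -B1 /

        # Record both every hour
        echo '0 * * * * root { date; lsblk -b; df -B1 /; } >> /var/log/disk-size.log' \
            > /etc/cron.d/disk-size

    If lsblk shows the block device itself shrinking, that can only happen on the host's side of the virtualization; if only the df numbers change, something inside the guest (e.g. a filesystem resize) would be the suspect.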

    Read the article

  • Can't login after upgrading to Windows 8.1

    - by flatline
    This afternoon, I upgraded my work laptop from Windows 8 to Windows 8.1. I previously had a local account, but after the upgrade it prompted me to enter my Microsoft account credentials, which I had set up beforehand at some point. I entered my password and clicked Next, went through another screen or two, grew tired of the process, and clicked whatever the equivalent button to "skip this step" I was presented with.

    Now I can't log in. Not with my (previous) local account password, and not with my Microsoft account password. It's a Dell with biometric identification, which I had set up previously, so I put my finger on the reader and it complained that I couldn't use that fingerprint because I had changed my password. But I hadn't wittingly changed my password at all. I assume that what happened is that, by entering my credentials, my local account was tied to the Microsoft account, but because I cancelled the process partway through, something went wrong and I cannot log in.

    A few questions:

    1. How do I log in with my Microsoft account credentials? Should LOCALMACHINENAME\username, which was my previous login method, still work? When I booted to safe mode it prompted me with WindowsAccount\myemailaddress, which allowed me to log in there, but the regular login doesn't accept the '@' symbol.
    2. Is there any way to make that account local-only again? I can't find any way of doing it.
    3. I managed to enable the local administrator account and get back into the box; failing all else, is there a quick way to migrate my old profile over to a new user?

    Read the article

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I found this link from AMD with the drivers for Windows 7:

        http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx

    I also know that AMD has stopped supporting the Radeon HD 4000 series and older:

        http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

    Now, on to the problem. If I install the 12.6 driver, Windows will stick with the "basic display adapter", and this is bad for 3D games like Minecraft, which now runs really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver might help to fix it. So I followed these steps:

    1. Extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql)
    2. Right-click the "basic display adapter" in Device Manager, and "Update driver"
    3. Search on PC; choose the driver "With Disk"
    4. "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF"
    5. There is a big list of drivers, and the nearest driver to the HD 2100 is "Radeon HD 2350 Series"

    My questions:

    1. Why isn't "Radeon HD 2100 Series" listed? (Or where is it listed?) In theory it must be listed: the first link above says "This article applies to the following configuration(s): (...) AMD Radeon HD 2000 Series".
    2. Am I doing something wrong?

    Read the article

  • Problems in Table of Contents formatting

    - by ChrisW
    Two questions about captions in Word (they are related, hence the same post):

    1. Using Word 2010 (and its inbuilt equation editor), I've got figure captions which contain equations (well, actually, they represent chemical formulas, such as nitrate, for which the correct representation is NO3- where the 3 is subscript and the - is superscript, but in the same column). However, when I generate a figure list, the equation displays as NO3- (with no subscript or superscript). Word knows it's an equation, though: the Equation Tools design ribbon/tab is displayed when I click on the NO3-. I've tried changing it from Professional to Linear and similar other obvious options, but still can't get it to display correctly. Here is a file showing the problem in action: http://dl.dropbox.com/u/101867759/EqtnTest.docx -- note how the (chemical) equation for nitrate is rendered correctly in the 'caption' on page 2, but not in the ToC on page 1.

    2. I have another caption where the whole figure is included in my list of figures. When I double-click on the caption in my text, the caption is highlighted (as expected), but so is the figure (this doesn't happen with any of my other figures), so I assume that the figure has been 'linked' in some way to the text. How do I remove this link?

    Read the article

  • System Issues and Major Malfunctions after Failed Hibernation Exit

    - by Sarah Seguin
    I have an HP G71-340US that went into hibernation mode for a while, and when I tried coming out of it, I got an error message:

        Your computer cannot come out of hibernation.
        Status: 0xc000009a
        Info: A fatal error occurred processing the restoration data.
        File: \hiberfil.sys
        Any information that was not saved before the computer went into hibernation will be lost.
        Enter=Continue

    So I hit continue, and it ran soooo super slow. It was seriously crawling. Finally I gave up and turned it off manually (i.e., press and hold the button).

    It's been a week or two since then, and EVERY SINGLE TIME I have tried to do ANYTHING, it takes forever. When I say forever, I literally mean it takes 5-7 minutes to load the internet, then the page itself, then to click a link, and so on and so forth. Eventually everything just goes not responding and I have to give up (4-6 HOURS later). I also cannot access my thumb/jump drives once I've managed to load Windows.

    I was going to try running MalwareBytes in case of a virus, but Windows Explorer develops errors and goes not responding on me. Currently I'm running chkdsk (scan disk/check disk), and like every file is coming back unreadable. I let it run the last 2 hours straight and I'm only at 6 percent with around 500+ errors and still going. Yes, I've taken logs of the errors via cell phone camera and patience.

    A week or two prior to this happening, I had to change out the hard drive due to blunt force trauma next to the mouse. OH! Running on Windows 7 :) And I've tried loading the computer in safe mode and it makes absolutely no difference.

    Any and all help would be appreciated. I really don't know what to do from here and I'm kind of freaking out. I've googled different parts of the error and things that I've done/seen, and there are so many different answers/topics that I thought it best to just post the questions.

    Read the article

  • Duplicate forwarded messages in Blackberry when using BIS

    - by Avery Payne
    Our setup: External email arrives at a Postfix server, is scanned, and then forwarded via settings in transport (using RELAY:[{ip-address}] for a given address) to an Exchange 2007 server. Some users are on Exchange, but a few are still on the Postfix server (they will be moved in the near future). IMAPS is provided for external connections via Dovecot; in-house, IMAP is provided for the gateway and native MAPI is used for Exchange/Outlook. Blackberries are connected via BIS, which uses Dovecot as a reverse-proxy IMAPS service to connect to Exchange (when the mailbox exists on Exchange; otherwise it connects to the mailbox on the gateway).

    The issue: We have a user who, when they forward an email in their Outlook client, gets a duplicate of the original message on their Blackberry. When I say duplicate, I mean that they have a copy of the forwarded version of the message (i.e., their version of the message that they obtained hitting the forward button), and a copy of the original message that shows up at the same time. The expected behavior is to see just the forwarded message, not the forwarded message plus a second copy of the original message. We've only seen this with Outlook users who also have a Blackberry. Other IMAP clients, such as OS X Mail or Thunderbird, do not exhibit this behavior when connecting to the Exchange server; forwarded messages work as expected.

    The questions:

    1. What is causing this to happen?
    2. Why does it only affect Outlook/Blackberry setups, and not Thunderbird/Blackberry or OSX-Mail/Blackberry?
    3. How do we get it to stop, before people go insane and never forward messages again?

    Read the article

  • Grub Installation Failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid; hence the post and plea for help. I am a beginner at Linux.

    I have a PC with a HDD (hard disk drive) and an SSD (solid state drive), running Linux Mint:

        /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
        /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
        /dev/sdb1 - SSD partition 1 - 100 GB (OS)
        /dev/sdb2 - SSD partition 2 - 20 GB (swap)

    The operating system, Linux Mint, was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned the sda into 2 TB and 1 TB and presented the 2 TB as /home to the OS.

    Anyway, last night I decided to make a return to Ubuntu via the path of elementary OS. Everything went fine with the install until it stated that GRUB installation failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is.

    My biggest pain is the files on /dev/sda2. I want to save these before I try something drastic, like wiping off /dev/sda completely. So I have the following questions:

    1. Can I use a live CD/USB to save these files? I can see /dev/sda2, but was unable to access the files from the live CD.
    2. Last but not least, how do I fix the main issue here? Why could the OS not install GRUB?
    2b. Why is my SSD /dev/sdb and not /dev/sda? Does it have something to do with the fact that my master boot record sits on the HDD (/dev/sda) and not on /dev/sdb?
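
    On question 1, rescuing the files from a live session usually comes down to mounting the partition explicitly, since a live desktop often won't auto-mount data partitions. A sketch, with the destination path as a placeholder for wherever the rescue drive is mounted:

        # Mount the data partition read-only so nothing can be made worse
        sudo mkdir -p /mnt/rescue
        sudo mount -o ro /dev/sda2 /mnt/rescue

        # Copy everything to an external drive
        rsync -av /mnt/rescue/ /media/usb-drive/backup/

        # If GRUB is the only problem, it can also be reinstalled from the live
        # session; the target disk should be whichever one the BIOS boots from
        sudo mount /dev/sdb1 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda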

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
    I bought a 4GB memory upgrade (2x 2GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 series) laptop. I installed the two memory modules correctly; however, with 4GB installed the laptop refuses to POST. I have tried the following:

        Tried each 2GB SODIMM without the other (worked fine)
        Tried the original 512MB SODIMMs (worked fine)
        Tried one original 512MB SODIMM with one new 2GB SODIMM (worked fine)
        Tried swapping over the 2GB SODIMMs (didn't boot)
        Left the computer for 10 minutes with both 2GB SODIMMs installed (didn't boot)
        Checked the latest BIOS is installed (no change)

    The Crucial website said that the laptop supported 4GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions that would be good to have answered:

    Question: Has anyone got an Acer Aspire 5100 series running with 4GB RAM?
    Answer: Yes, I have now got one working with 3.75GB usable; the rest is occupied by the graphics card.

    Question: Any tips on getting this to work; is there a CMOS reset switch?
    Answer: Yes there is. If both SODIMMs are removed, two very small interlocking PCB tracks are revealed. If these are shorted together with a screwdriver, the BIOS will be reset.

    Thanks.

    Read the article

  • Specific issue with the Data Pump API in Oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is with a stored procedure on the server side which the client may call, as the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that using a stored procedure is the best way, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure.

    I would like to ask about two issues with the API:

    The first: the detach function to detach the handler -- is it necessary to use it at the end of the procedure, and what happens if I don't? I read the Oracle documentation but I didn't get their point. They say it doesn't terminate the job but indicates that the user is not interested in it, yet when I use detach at the end of my procedure the exported .dmp file disappears.

    The second: to perform a user-triggered (client-side) backup where the modifications are only to the data, I used the TABLE parameter for the export operation. But the version parameter -- what should it be? I also read the documentation but couldn't determine which one I need (LATEST or COMPATIBLE)?

    Thanks

    Read the article

  • How does everyone set up AWS for PHP with a git workflow while worrying about distributing EC2?

    - by Parris
    Hello, I have been looking for something like Heroku but for PHP, and after much frustration (and almost finding what I need, but not quite) we decided to just go with AWS without any other abstraction. We are using PHP 5.3 (and CakePHP 1.3), and are currently using git. Ubuntu seems like the easiest way to get both of those on there, and we will most likely use that. We aren't really going to worry about outgoing email; we are using SMTP through Gmail, but will most likely switch to some other service eventually.

    I have 3 questions:

    1. I have been looking at Zend Server, and I am not quite sure how it is more beneficial than XAMPP. Perhaps it is not?
    2. I suppose that to make the application scale, we would need multiple instances of some EC2 AMI, and then just duplicate it and such. The question then becomes: how do we make sure all EC2 instances are up to date?
    3. I understand the concept of load balancing to some degree. I understand that in one region you select a bunch of servers and have it load balance across them. The question then becomes: how about worldwide? How do I make it so that traffic is directed to the correct EC2 server? I have heard of Route 53, and tried signing up for that, but nothing appears in my control panel. Or perhaps it is just a DNS thing with my domain registrar?

    AHHH... some tutorial would be helpful!
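
    On question 2, one low-tech pattern that works at small scale is simply fanning a git pull out to every instance from a single deploy box. A sketch (hostnames, user and path are all illustrative):

        #!/bin/sh
        # deploy.sh -- update every instance to the current master
        for host in app1.example.com app2.example.com; do
            ssh deploy@$host 'cd /var/www/app && git pull --ff-only origin master'
        done

    Past a handful of instances, the usual next step is baking a new AMI per release or using a configuration management tool, so instances launched by autoscaling come up already current.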

    Read the article

  • Please explain these mongo statistics

    - by sivann
    My setup: I have 2 hosts, each with 2 shards. Host1 has 2 shards and is the master of the replicas; host2 has the secondaries of the 2 shards:

        host1: shard1 (repset1), shard2 (repset2)
        host2: shard1 (repset1), shard2 (repset2)

    There's also a 3rd host that acts as arbiter. I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert.

    The questions:

    1. mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the locks be the same?
    2. mongostat reports qr=30 for both shards of host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported as 0.
    3. Faults are about the same on all shards/hosts (around 80).
    4. netIn/netOut on the secondaries (host2) is always about 200 bytes/sec. Too low.
    5. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this?

    Please answer whatever you can. Thanks!

    Read the article
