Search Results

Search found 6931 results on 278 pages for 'almost surely'.

Page 194/278

  • Using TrueCrypt to secure a MySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data against server theft, i.e. the server is at a business office location with a normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen, the data would be unavailable because it is encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data inside the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised. But as a sanity check, is there any good reason not to do this?

    @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, so it's likely to crash any DB transactions that are running. But then if someone steals the server I'm going to need to rely on my off-site backup anyway.

    @tomjedrz: it's kind of all sensitive; individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it's better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there has to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of the data (weekly full, then hourly deltas using rdiff) into a directory that is also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect over FTPS and sync the backup down, again into a TrueCrypt-mounted partition.
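    For anyone wanting to replicate this layout, here is a minimal sketch of the idea, assuming a Linux host; the volume file, mount point and datadir paths below are placeholders, not details taken from the post:

        # mount the TrueCrypt volume (prompts for its passphrase), then point MySQL's datadir at it
        sudo truecrypt --text --mount /root/dbvol.tc /mnt/securedb
        # in /etc/mysql/my.cnf:
        #   [mysqld]
        #   datadir = /mnt/securedb/mysql
        sudo service mysql start
        # to make the data unreadable again, stop MySQL and dismount the volume
        sudo service mysql stop
        sudo truecrypt --text --dismount /mnt/securedb

    The point of the design is that the passphrase is entered interactively at mount time and never stored on disk, so a powered-off (or stolen and unplugged) box holds only ciphertext.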

    Read the article

  • computer fails to boot during/after POST for five or six boots, then works

    - by N13
    For the last few days, my computer has had issues booting. I've seen two different behaviors:
    1. The screen displays the graphics card information, then begins to list the RAM, hard drives, etc. At different points in this process (after the graphics info), the computer shuts off. After five or six attempts, it then boots normally.
    2. In roughly the same time frame, the computer freezes and fails to boot. I think it boots successfully on the next attempt.
    I've also noticed that in some instances the computer freezes on shutdown: it gets right to the point where it should power off, but doesn't. I recently combined the best parts of two different machines into this one. I'm booting to GRUB, with Ubuntu 12.04, Linux Mint 11 and Windows Vista (unfortunately) as my OS options. It has an Enermax Modu82+ 525W power supply, and I've used an online calculator to determine that my load shouldn't exceed 400W. I even unplugged a hard drive, but that didn't help. I found the latest BIOS, flashed it and checked the settings, but that didn't fix it. I'm fairly certain this issue didn't exist at first, but it might have started when the power at my new apartment dropped for a second. The machine is plugged into a surge protector strip, but it's old and I've heard they lose effectiveness with age. Is a power dip as damaging as a spike? If something were damaged, why would it boot successfully after five or six attempts? It's almost as if the BIOS or PSU needs to be primed. The trouble with debugging is that there seems to be a "grace period" after shutdown where the issue doesn't present itself again. What should I try next?

    Read the article

  • What could be the reason for this slowness? (details in message body)

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem that looks just like an idle wait loop somewhere in the code (though it probably isn't one). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server, and one for the accounting/CRM server (Firebird + a proprietary app server working with Delphi-made clients). CPU load (as "top" reports) is almost always near zero. Host system RAM is close to 100% used but not overloaded (very little swap gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works terribly slowly against this server (and works much better against an in-LAN Windows server). I just can't imagine what could make it slow when there is so much CPU, RAM, disk and bandwidth headroom available on the clients and on the server, even at their busiest moments. When I say bandwidth is underused, I don't just mean that the clients and the server are connected to the Internet with bigger pipes than they actually use (which would still leave a chance of some bottleneck on the route between them); I've also tested the bandwidth between the clients and the server by copying files between them.
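    One thing the measurements above don't cover is round-trip latency, which chatty client/server database traffic tends to be very sensitive to. A hedged check from one of the client machines (the hostname is a placeholder):

        ping -n 50 crm-server.example.com   # Windows syntax; compare the average RTT with what the in-LAN server gives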

    Read the article

  • DSL Connection drops

    - by user60024
    OK, I just moved, so I had to switch from cable to DSL. I know very little about computers or internet connections and such, so I had AT&T come out to the new house to set up their highest speed. When they got here, they told me that I needed to downgrade to 3.5 Mbps because I was too far away. Well, we did, and everything was going great for two days until I started experiencing random disconnects, which have now been happening for about 2 to 3 weeks. I am using an N300 Wireless Dual Band ADSL2+ Modem Router, and my ethernet cable is hooked directly into my computer from it. I recently started to notice that it disconnects around 5:30 and 8:30, which may be because a lot of people are on their computers(?), and that it works almost perfectly all the time as long as I'm not playing a game. Around those times, when I try to load up World of Warcraft, the Internet light disappears and the DSL light begins blinking. (So maybe it's too much for the modem and it resets?) Other than that it is amazing, but I'd like to try and fix some of these problems. If you need more information, let me know how to get it for you and what to do. Thanks for the help!

    Read the article

  • Windows 7 Explorer crashing when trying to read an external hard disk

    - by Mario De Schaepmeester
    I have a 1TB Western Digital hard drive which is almost full, and the last time I tried to plug it into my laptop I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and I know it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the Explorer process crashes after a while. I simply close Explorer, since it takes ages trying to read the disk and nothing happens. After searching on the internet, the best thing to do would seem to be a chkdsk. I tried it via Properties in Explorer (which also took a good 5 minutes to open up); that locks up as well, and after waiting a couple of minutes it says there's no access to the disk, so a chkdsk is not possible... I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when trying to shut down Windows, the logoff screen just would not disappear (I waited at least 10 minutes or so), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognised immediately after that. I really don't want to format this thing, because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action?
    Update: I got chkdsk working via the command line, using the /F and /R options. I'm already getting a bunch of lines saying "file record segment X is unreadable" (or whatever it is in English; my OS is Dutch). It looks bad... Will chkdsk repair these errors?

    Read the article

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
    I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying, "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram and get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them. But I can't, because the definition is binary. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were, or is this information lost, at the mercy of an SSMS-proprietary database diagram format and viewer that refuses to cooperate with me?
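    One hedged, long-shot way to peek inside that binary column (the server, database and diagram names below are placeholders, and whether table names survive as readable text in the blob is an assumption, not a promise): export the definition to a file and scan it for UTF-16 strings.

        # dump the diagram's binary definition to a file (run where the SQL Server client tools are installed)
        bcp "SELECT definition FROM MyDb.dbo.sysdiagrams WHERE name = N'MyDiagram'" queryout diagram.bin -S MYSERVER -T -N
        # scan the blob for little-endian UTF-16 text; any table names stored as text should show up here
        strings -e l diagram.bin | sort -u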

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other, etc.). One solution I had never used before is the one built into Windows, mostly because the infrastructure guys always refuse to use it, claiming it's 'not secure'. Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up, well-behaved, it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product ever again. I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008 it supports L2TP/IPsec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up to date to me. But I'm just a lowly developer. Can someone in the know give me the lowdown on this?

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster (screenshots not reproduced here):
    - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    - The real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB RAM allocated, averaging under 9GB of use.
    - Summary of the same VM.
    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?

    Read the article

  • Getting a D-Link DWL-G122 to work on Ubuntu

    - by User1
    I have a USB WiFi adapter, a D-Link DWL-G122, and I'm running Ubuntu 10.04. My laptop has a built-in wireless card that is connecting fine to the router. I plug in the USB adapter and it never really connects. Here are some details.
    iwconfig:
        wlan1  IEEE 802.11bg  ESSID:"\x0B\xE1..."
               Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated  Tx-Power=19 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:on
    lshw -c network:
        *-network:1
             description: Wireless interface
             physical id: 3
             logical name: wlan1
             serial: 00:13:46:8b:xx:xx
             capabilities: ethernet physical wireless
             configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg
    dmesg:
        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)
    lsmod | grep rt2:
        rt2500usb  19643  0
        rt2x00usb  11260  1 rt2500usb
        rt2x00lib  32133  2 rt2500usb,rt2x00usb
        mac80211  238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211  148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class   3764  3 ath5k,rt2x00lib,sdhci
    It looks like the driver loads, but the adapter doesn't feel like connecting. The behavior is identical even if I blacklist the other WiFi card (which uses the ath5k driver). It's almost like it is using the wrong password or something. Does anyone know what is happening? Is anyone using this adapter successfully on Ubuntu?
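    One hedged way to take NetworkManager out of the equation and test the adapter by hand (the SSID and passphrase below are placeholders, and WPA-PSK plus the older wext driver backend are assumptions about the setup):

        # generate a minimal wpa_supplicant config and associate manually on wlan1
        wpa_passphrase "MySSID" "my-passphrase" > /tmp/wpa.conf
        sudo wpa_supplicant -B -i wlan1 -c /tmp/wpa.conf -D wext
        sudo dhclient wlan1    # if this gets a lease, the driver is fine and the problem sits higher up the stack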

    Read the article

  • I can't connect to the internet via LAN cable because the 2032 battery died and my BIOS info is now empty [closed]

    - by Rand Om Guy
    I have a Compaq CQ61-112SL from about 5 years ago; the main battery is almost dead and doesn't last more than 10 minutes. Anyway, my problem is that my motherboard battery ran out of energy a few days ago, and since then I can't access the internet through the LAN cable, only via WiFi. I need the cable, though. I saw that on my BIOS setup page a bunch of parameters were missing, like the serial number, UUID, product number and stuff like that. Also, when I start the notebook it prints something like: "No serial found", or something like that. I don't really know if the reason my LAN cable doesn't work is the empty BIOS, but I assume that's it. If it's not, please enlighten me. Or anyway, tell me how to update the serial number and product number to the real ones (instead of the 0000000000000 that is now in my BIOS). I downloaded HP DMI, which should make it possible to set these variables in the BIOS, but I'm on Windows 8 64-bit and the executable file that I need to run for my laptop model says it can't run on 64-bit.

    Read the article

  • Can I disable the message line when launching screen -RR?

    - by Jimm Chen
    screen -RR is great. It does one of two things automatically: if there is any detached screen session, it picks one and attaches to it; if there is no detached screen session (no session yet, or all of them are already attached to other terminals), it creates a new screen session. I use Windows Server Remote Desktop a lot, and screen -RR behaves almost the same way a client does when it connects to a Remote Desktop server. It feels natural and I like it. However, when screen -RR determines it should create a new session, it displays a message line at the bottom of the terminal for 5 seconds. I'd like to suppress this message line because it brings us little benefit. In my opinion, a remote user can always easily tell whether he is connected to a resumed session (a piled-up display) or a newly created session (a clean display) from what he sees in the terminal window. So, is there a way to suppress that nagging "New screen..." message? Just that one, without suppressing the message line globally. My environment: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06.
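    A hedged partial workaround (this shortens how long any status message stays up, rather than selectively suppressing that one message; these are standard GNU screen settings for ~/.screenrc):

        # in ~/.screenrc
        msgwait 0              # status messages, including the one shown on session creation, vanish immediately
        msgminwait 0
        startup_message off    # also skips the full-screen copyright splash for new sessions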

    Read the article

  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of cgi (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server, set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link to this common directory under the individual cgi-bin. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to make manual changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory, and to allow the web server user to execute scripts it doesn't own. Maybe I am making things more complicated than they are; I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on Stack Overflow and someone suggested that I ask it on Server Fault.)
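    A minimal sketch of the symlink approach, assuming the shared scripts live under /opt/shared-cgi; every path and the ACCOUNT name below are placeholders, not cPanel-specific facts:

        # put the common scripts in one place, readable and executable by the web server user
        sudo mkdir -p /opt/shared-cgi
        sudo cp /path/to/your/scripts/*.pl /opt/shared-cgi/
        sudo chmod 755 /opt/shared-cgi/*.pl
        # in each domain's cgi-bin, link to the shared directory
        ln -s /opt/shared-cgi /home/ACCOUNT/public_html/cgi-bin/shared
        # Apache side (Apache 2.0 syntax), placed in whichever include file cPanel honours rather than httpd.conf:
        #   <Directory "/opt/shared-cgi">
        #       Options +ExecCGI +FollowSymLinks
        #       AddHandler cgi-script .pl
        #       Order allow,deny
        #       Allow from all
        #   </Directory>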

    Read the article

  • Windows 7 boot animation slows down startup by default?

    - by kngofwrld
    I just upgraded my HDD to an SSD. I am running a completely fresh install and enjoy the short boot time. I tweaked the startup to be as fast as I could by removing unneeded apps and such, and I'm not using a solid-color desktop background (which causes a 30-second startup delay). I have a 2.1 GHz 64-bit laptop with 4 GB of RAM, so it's not a liquid-cooled speed monster, but I checked some super-high-end PC boot videos on YouTube and noticed that they start up in almost the same time as my machine. I also noticed that the glowing Windows 7 animation plays all the way through no matter how fast the PC is. I turned off the animation, and the startup time is unchanged. I turned on verbose startup info and noticed that it runs until the very end, where it looks like it just sits there for a few seconds for no reason, waiting for something to happen. So now I think that the Windows 7 startup animation has a timer built into it that forces the computer to wait for no other reason than to play the full animation. Super-fast XP boot videos on YouTube seem to start much faster (and not just because they "have less to load"). Am I imagining things? My question is: how can I turn off not just the animation, but the timer for the animation? Here is a video that tipped me off; I have no relation to the poster. (Warning: the soundtrack might be loud.) http://www.youtube.com/watch?v=T5LkX3xejJ4

    Read the article

  • Screen randomly goes blue/black/white

    - by FubsyGamer
    Problem
    Randomly, while using my computer, the monitor goes dark grey/almost black, or it goes white with faint grey vertical lines, or it goes blue with black vertical lines. It's as if the computer powers off. People tell me I sign out of Skype, Spotify stops playing when it happens, etc. When I look at the tower, it doesn't seem like it's off at all: nothing changes, fans are spinning, lights are on, etc. If you were only looking at the tower, you'd never know there was a problem. The only way I can get it to come back up is to push and hold the power button and turn it off, then back on. This never happens while I'm playing video games; I've done 5-6 hour sessions of League of Legends, and it doesn't do anything. When I'm just browsing the web, reading email, checking Reddit, etc., it happens all the time. It can happen multiple times in a session, and it usually takes only about 5 minutes from the time I start browsing to when the computer crashes. This started happening after I moved to a new apartment (this has to be relevant somehow; it was not happening where I lived before). There is nothing in the crash logs or event logs.
    System specs
    - i5 2500k CPU
    - AMD Radeon 6800 GPU
    - Gigabyte Z68A-D3H-B3 motherboard
    - WD VelociRaptor 1 TB HDD
    Screenshots (not reproduced here): Device Manager, About screen
    Things I have tried
    - I was getting a WMI error in my event logs, but I fixed it using Microsoft's fix, KB 2545227.
    - I was using Windows 8. I wiped the HDD and downgraded to Windows 7 64-bit.
    - I took out the video card and used a can of air to totally clean out the video card, all fans, and the inside of the computer in general. I made sure all of the video card pins were fine, then reconnected it.
    - I tried to update my motherboard BIOS, but anything I downloaded from Gigabyte was only for 32-bit machines, not 64. I don't even know how to tell what BIOS version my motherboard is on right now.
    - I am using a power strip, and anything else connected to it works just fine.
    - If I re-seat the monitor cable while this is happening, nothing changes.
    Please, help me. I've been battling this for several weeks now, and it's so frustrating it makes me not even want to use the computer.

    Read the article

  • How to tell Linux to explicitly swap out main memory of suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory on my laptop, so it is paging and swapping and thrashing all the time and the load average is about 2 (compcache is already in use, along with a usual swap partition), but it is slowly moving forward (although I'm afraid it will eventually try to allocate 2 GB and crash, wasting 2 days of thrashing). When I want to use the laptop for something else, I stop the process and start the X server, Firefox and other programs. The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost completely unresponsive (a long time to toggle Caps Lock, slow mouse cursor position updates, slow switching from the X server to the Linux console, slow login). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How do I tell Linux to swap this process out entirely (e.g. I'm not intending to resume it in the short term), possibly waking other data from swap in exchange? It would also be useful to be able to specify the exact swap device to swap the given process out to.
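    One hedged approach is to park the stopped process in a memory cgroup and shrink the group's limit, so the kernel reclaims its pages to swap right away. This is a sketch assuming a kernel with the cgroup-v1 memory controller mounted and swap enabled; the mount point, group name, PID and the 64M figure are all placeholders:

        sudo mkdir -p /sys/fs/cgroup/memory/parked
        echo <PID_OF_MKCROMFS> | sudo tee /sys/fs/cgroup/memory/parked/tasks
        # setting the limit below current usage forces the kernel to reclaim (i.e. swap out) the excess now
        echo 64M | sudo tee /sys/fs/cgroup/memory/parked/memory.limit_in_bytes

    As far as I know, which swap device receives the pages can only be influenced indirectly, via swapon priorities (swapon -p), not per process.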

    Read the article

  • CPU usage always below 10% in windows server 2008 r2 x64

    - by ???
    I am using a server running Windows Server 2008 R2 to run my program. The CPU is an Intel Xeon X5570 at 2.93 GHz, with 2 processors and 8 cores per processor. However, I found that the CPU usage is almost always below 10%, even though I use 32 threads in my program. I also found through Task Manager that the CPU usage can sometimes reach as high as 93% while running my program, and at those moments my program processes over 1000 files per second, while normally it only processes over 50 files per second. However, this does not happen often. I used tools downloaded from the internet to make sure no core sleeps when the server is on, but nothing changed. I also edited the Windows registry to make sure that I, as an administrator, have no CPU usage limit, but that changed nothing. Is there any way I can make full use of my CPU? That is to say, each core runs a thread of my program and the total CPU usage reaches over 50% when I use a reasonable number of threads. Has this happened to any of you? Could you help me with this? Thank you!

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm just switched to Google Apps Premier edition 2 weeks ago, and aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server, we had numerous shared contact lists set up in the shared folders: a separate list for vendors, sales agents, etc. Is there not a way to set up lists or groups like this at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers, unless you purchase some third-party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and it almost does it, but it falls short when trying to group contacts at the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking at creating a Google Docs spreadsheet to share with the domain, just to have a separate, defined list of contacts that is searchable by various fields... Anyone who could shed some light on domain-level contact sharing relating to the points above, I would be most grateful.

    Read the article

  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command: pfiles `fuser -c / 2>/dev/null` indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files
    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual descriptor count well within sane limits, the issue also persists (and appears immediately) even with the file limit raised to 65535, and always only for hosts.allow/deny.
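    A couple of hedged checks that may help separate the shell's ulimit from what the running mysqld actually inherited (Solaris-style tools, which Nexenta should have; the grep pattern only gives a rough descriptor count):

        plimit $(pgrep -x mysqld) | grep nofiles     # per-process descriptor limit of the live daemon
        pfiles $(pgrep -x mysqld) | grep -c 'S_IF'   # rough count of descriptors it currently holds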

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS X 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these onto one Drobo Thunderbolt drive array attached to a Mac Mini server (running 10.9 Server) using Time Machine Server. My question is, will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network rather than on ethernet (usually connected via 802.11n until they upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac Mini server is connected directly to the top-level switch. Update: failing information from someone who has done this in practice, I suppose my question is really about how switches work. If three or four people are backing up simultaneously, and two other (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?

    Read the article

  • Periodic internet connection drops

    - by user9647
    My setup is a DSL modem and a D-Link DI-524M router. I'm also using a Witopia VPN, which runs through OpenVPN. I've been having trouble with the internet connection dropping very frequently. It comes back shortly, without even a router/modem/computer restart. This happens as frequently as every ten minutes; occasionally (not often) it will last as long as an hour or two without dropping. When it drops, I can get it back almost immediately by clicking Reconnect in the OpenVPN GUI and letting that do its thing. It's worth noting that I'm in China, so calling support is a bit difficult. Also, I don't really understand all of the router's software, although I've got it generally figured out. I've tried a bunch of things to diagnose and/or fix the problem, with no success from any of the following:
    - I've power-cycled both the modem and the router.
    - I've tried an ethernet connection to the router.
    - I've connected without the VPN.
    - I've disabled IEEE authentication on all connections.
    - I've checked for viruses.
    - I've tried lifting it off the ground so as to prevent overheating.

    Read the article

  • SBS 2008 SharePoint database: what is a good memory limit?

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database is getting out of control with its memory usage. I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of it. This never happened before; it started after I installed Backup Exec 2010, and it happens after a backup is performed, so I suspect there is a memory leak involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses, and I know how to do it. My question is, what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of the users uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way also. Let me know if you need to know anything else. Thanks.

    Read the article

  • Windows 8 install app for multiple user accounts

    - by Robert Graves
    I purchased Adera Episode 2, intending to play through it with my son. We each have our own user account on the same PC. When my son logged in, he was prompted to purchase the app, which I had already purchased, installed, and played on the same PC. So I checked the Terms of Use. After selecting an app in the Store, there is a Terms of Use link on the left side under the Install button; it is almost impossible to identify it as a link unless you put your mouse over it. The Terms of Use are standard across all apps in the Store, not specific to particular apps. They indicate that an app may be installed on up to five devices, but say nothing about multiple user accounts on those devices. However, this Microsoft blog article indicates that it is allowed: "Say, for example, that your family has a shared PC. You have previously used your Microsoft account to purchase a game that all your kids like to play. You can install it for each of your kids by having each of them sign in to their Windows accounts on the shared PC, then launch the Store and sign in to the Store using your own Microsoft account. There, you'll see all your apps and you can re-install the app on your kid's Windows account. Installing apps on multiple user accounts on a shared PC still only counts as one of the five allowable PCs where you can install apps." So I have two questions: 1. Is it permissible under the Terms of Use to install the app under multiple accounts on the same device? 2. If so, how do I do so, given that my son has already signed into the Store using his own Microsoft account?

    Read the article

  • DVI output _only_ working on Windows, not during booting or on Linux

    - by Mononofu
    So yesterday I booted my laptop up and the external monitor I have it connected to just stayed black. At first, I thought the problem would go away once Ubuntu was loaded, but it didn't. I tried to reboot a few times, to no avail. Then I decided to give Windows 7 a try, and suddenly (at the login screen) my external monitor turned on and worked as normal. I have the monitor connected via DVI, and this now only seems to work with Windows. I don't even get a signal in my BIOS! Mind you, everything was working fine before that, and I didn't change a single thing. I then tried to connect the monitor via VGA (from my DVI jack, which can output VGA using an adaptor), and it worked again. However, 1920x1200 over VGA looks like crap: black print on a white background is basically illegible. Do you have any ideas how to fix this peculiar problem? I only use Windows for gaming, so it's no real help that it still works normally there. Please also excuse any spelling mistakes; I am practically typing this blindly. Edit: I only have one graphics card in my laptop, and I can't select anything related to that in my BIOS. In fact, I can do almost nothing there. My laptop is a Nexoc Osiris E703, and the graphics card is a GeForce Go 7900 GTX. As I mentioned before, DVI output during booting and on Ubuntu had been working fine for years before yesterday!

    Read the article

  • Ubuntu suspend works, but then it immediately resumes

    - by Yoav Aner
    I'm having a strange problem after upgrading from Ubuntu 11.04 to 12.04. Previously I could suspend just fine: the computer would switch itself off, and pressing the power button would switch it on and it would resume. After upgrading to 12.04, however, when I suspend it does (almost) the same thing: it turns itself off, but about 2 seconds later the computer turns itself back on and comes back to life from suspend. I haven't changed any of the hardware or BIOS settings, and it was working fine before. I've also tried every possible switch of pm-suspend, setting acpi_sleep=nonvs in /etc/default/grub, and also this suggestion, but nothing seems to make a difference... UPDATE: I just tested suspend using the 12.04 live CD and it worked perfectly fine... but when I boot normally it doesn't. ANOTHER UPDATE: After re-installing, I noticed that I can suspend. However, after restoring my home folder the strange suspend problem happens again. I then created a new user account, and when I log in to that other account I can suspend without a problem... So this seems specific to my account only. How/what can cause this in my own user settings?
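    Even though the per-account behaviour points at a user-level setting, a cheap hedged first check is whether some device is armed to wake the machine (device names differ per machine; EHC1 below is only an example, not taken from the post):

        cat /proc/acpi/wakeup                    # lists devices and whether each is enabled to wake the machine
        echo EHC1 | sudo tee /proc/acpi/wakeup   # writing a device's name toggles its wake permission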

    Read the article

  • Windows 7 image in VMware allows network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image, I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses resolve correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly, but within a browser I cannot reach any page via hostname or IP. I feel almost like port 80 is being blocked, but I can't find any reason this would be the case. If anyone has had this occur before, I would love some insight into the problem. I initially asked this on Stack Overflow, and my eyes are now opened up to Super User. Thank you for any help you can provide.
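    A few hedged checks from inside the guest to narrow it down (the hostname is just an example): if a raw TCP connection to port 80 works, the NAT path is fine and the culprit is more likely a proxy setting in the browser or something like an MTU problem on the NAT interface.

        nslookup www.google.com    # DNS (the post says lookups already resolve correctly)
        telnet www.google.com 80   # raw TCP to port 80; on Windows 7 the Telnet client feature may need enabling first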

    Read the article
