Search Results

  • Windows refuses to believe printer is online unless I delete and re-add it

    - by Marcin
    I have a Canon MP560. It is online, in the sense that (a) I can connect to its internal web server; and (b) if I delete the printer in Windows (Windows 7) and reinstall it, Windows will recognise that the printer is online and talk to it. In all other cases, the Windows and Mac computers (purchased last week, running the latest Mac OS) in my household simply will not recognise that the printer is online. The printer is statically configured to use the same IP address, so that's not the issue. Because the printer works just fine with Windows after a delete and re-add, I assume that the issue with the Mac is that I haven't installed whatever drivers Macs need to talk to Canon printers over a network.

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious as to what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option -- I find they are far more obvious when someone is navigating the file system -- but, on the other hand, aliasing is more platform independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything which relies on symlinks in a *nix-based system cannot be transferred. (I know that 64-bit Windows versions have symlinks, but I haven't seen whether they can be read correctly by the environments previously mentioned.) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are lower-level than Apache it would make sense that they would be resolved faster, but, on the other hand, I would guess that Apache has to do a lookup in either case, so it would be disk-read dependent.
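
    For reference, a minimal sketch of the two alternatives under discussion (paths and vhost placement are hypothetical, not taken from the question):

        # symlink inside the DocumentRoot
        ln -s /srv/shared/assets /var/www/html/assets

        # equivalent Apache alias in the vhost or httpd.conf, plus the usual access grant
        Alias /assets /srv/shared/assets
        <Directory "/srv/shared/assets">
            # "Require all granted" is Apache 2.4+; older 2.2 setups use Order/Allow directives
            Require all granted
        </Directory>

    Either way Apache still has to open the target path on disk, which is consistent with the guess above that any speed difference is dominated by disk reads rather than by Apache itself.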

  • Adding a trigger command to autocomplete function in zsh

    - by mkaito
    When you define an alias like alias g=git, the shell will pick it up and run the corresponding autocomplete function. Now, there's a program out there called hub, which is basically a superset of git, with some added, GitHub-specific functionality. The recommended way to use hub is to alias git=hub. Of course, this won't trigger the autocomplete function for git, which makes sense. Now, if I wanted to have git's autocomplete trigger for hub, the only way I know of is editing /usr/share/zsh/functions/Completion/Unix/_git and adding hub in the first line as a trigger. While this works, it isn't practical, since this file will get overwritten with the next zsh release. Assuming hub won't provide a zsh completion function any time soon, is there another way of adding hub to the trigger commands for git's autocomplete function?
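
    One possibility (a sketch, not tested against the poster's setup) is to register hub with the completion system from ~/.zshrc instead of editing the _git file that ships with zsh:

        # ~/.zshrc -- after the completion system has been initialised
        autoload -Uz compinit && compinit
        alias git=hub
        compdef hub=git    # complete 'hub' using whatever completion 'git' uses

    Since this lives in the user's own dotfiles, it survives zsh upgrades that would otherwise overwrite /usr/share/zsh/functions/Completion/Unix/_git.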

  • Remote Desktop network connection failure

    - by tbischel
    I was trying to create a Remote Desktop connection from Windows XP to my Windows Vista Ultimate Edition machine at home. This normally works fine. Today, after my connection was dropped, I tried to reconnect to my machine. It brought me to the normal startup screen, but when I tried to log in, it gave me the message "This network connection doesn't exist". This doesn't make much sense, as I had already reached a Windows-style login screen. My connection returned later that day, but I'm curious as to what happened. Has anyone seen this before?

  • Recommendations for VMware web server environment with load balancer

    - by Ben
    We run IIS websites on a VMware production server that pull image and video content from a separate IIS instance on another server (the media server). The media calls (images and video) are plain http:// requests, not a streaming application. During peak traffic periods, we clone the production server five times and have a load balancer distribute traffic to all five production servers. The media server does not get scaled up. We noticed that processing and resources on the media server get very taxed during this period. Would it make sense to run the media server's IIS instance locally on the production server, have it cloned along with the production servers, and then add a rule on the load balancer to route these media calls from the website? Or would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

  • How can I compare Excel serial dates WITHOUT converting to mm/dd/yy type dates?

    - by dwwilson66
    I have a table that contains a number of values representing Excel serial dates. After a number of unsuccessful attempts to compare fields, my current approach is to do comparisons between serial dates instead of calendar dates. I am trying to summarize the data--by DAY--with formulae. CONSIDER:

        41021        some data
        41021.625    some data
        41021.63542  some data
        41022        some data
        41022.26042  some data
        41022.91667  some data
        41023        some data
        41023.375    some data

    DESIRED RESULT:

        41021    sum of the 41021, 41021.625 and 41021.63542 data
        41022    sum of the 41022, 41022.26042 and 41022.91667 data
        41023    sum of the 41023 and 41023.375 data

    In essence, for every SerialDate.SerialTime, SUM the data values associated with SerialDate.* regardless of the *.SerialTime for that date. While I can see how to do this by creating an additional dates column formatted as =TEXT(<DateField>,"mm/dd/yyyy"), I'm looking for a solution that handles this 'conversion' inside the formula itself, e.g. something like SUMIF(TEXT(<dateRange>,"yy/mm/dd"), TEXT(<dateField>,"yy/mm/dd"), <dataRange>). Make sense? Anyone have any ideas? Thanks
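
    A minimal sketch of one way to do this without any text conversion, working directly on the serial numbers (the column layout is assumed for illustration: serial date/times in A2:A9, data values in B2:B9, and the whole-day serial such as 41021 in D2; SUMIFS needs Excel 2007 or later):

        =SUMIFS($B$2:$B$9, $A$2:$A$9, ">=" & D2, $A$2:$A$9, "<" & D2 + 1)

    Because a serial date's time component is just the fractional part, everything from D2 up to (but not including) D2 + 1 falls on the same calendar day.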

  • Nginx Restart Issues

    - by heavymark
    All of a sudden, when restarting Nginx I get the following error:

        Restarting nginx: [alert]: could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
        2011/02/16 17:20:58 [warn] 23925#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
        the configuration file /etc/nginx/nginx.conf syntax is ok
        2011/02/16 17:20:58 [emerg] 23925#0: open() "/var/run/nginx.pid" failed (13: Permission denied)
        configuration file /etc/nginx/nginx.conf test failed

    On the front end, part of the site loads, but some files, the CSS in particular, are not loading. They exist on the server, but when loading the resources directly in Chrome they say "Oops, this page can't be found." I run my Apache domain files under a dedicated user and group via suexec. I think the nginx files are owned by root, however, which I'm assuming is the problem -- but which nginx file ownerships would I change?
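
    Both "Permission denied" lines point at the restart being issued as a non-root user (the warning about the "user" directive says the master process is not running with super-user privileges), rather than at ownership of the nginx files themselves. A minimal thing to check, assuming the stock init-script layout:

        # restart with root privileges so the master process can open the pid and log files
        sudo /etc/init.d/nginx restart

        # then confirm who actually owns them
        ls -l /var/run/nginx.pid /var/log/nginx/error.log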

  • Command line audio library manager for Linux

    - by Ketil
    Hi all. Here is my set-up: I have a Linux server that is running Music Player Daemon, and all the audio files are under a directory (/muzik) which is exported by NFS. To add files to the MPD database, I just drop the files into the /muzik NFS share and update the MPD db. So far so good, but I would like to keep the directory structure below /muzik in some sort of order. To achieve this I am using Amarok, which I start on my laptop, and then use its organise-files command to sort the files into a sensible directory structure based on the tags in the files. Do you know of any command-line utility that can do the same thing I am using Amarok for, so I can run it from cron on the server and automate the process? I hope that this makes sense.

  • MySQL replicate multiple places

    - by Frederik Nielsen
    Very tricky task to find a good title for this question, but here goes: I have a few development machines that I develop my PHP applications on, testing via a local webserver. This works out pretty well for each machine. However, I would like to replicate the DB from my machines to a central location. So, to sum up:

        DEV1 -> CENTRAL
        DEV2 -> CENTRAL
        DEV3 -> CENTRAL
        CENTRAL -> DEV1
        CENTRAL -> DEV2
        CENTRAL -> DEV3

    I hope this makes sense, as I cannot find an easy way to describe it. Basically, it is a two-way replication, where all four databases contain the same info, and each of them can be updated locally and then pushed out to the others. Is this actually doable? All my dev machines are running Windows 7, and my central DB server is running CentOS 6.
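
    For context, a rough sketch of the kind of my.cnf settings a master-master pair usually needs (stock MySQL replication of this era allows only one master per slave, so a full mesh of four servers is normally built as a ring, each box acting as the master of the next; the server names and offsets below are illustrative only):

        # on CENTRAL
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 4   # one slot per server in the ring
        auto_increment_offset    = 1   # DEV1/2/3 would use offsets 2, 3 and 4

    Each box then points at its master with CHANGE MASTER TO and starts the slave thread; the auto_increment settings keep locally generated keys from colliding when the changes meet.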

  • Should I have a Heroku worker dyno to poll an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably burn CPU cycles listening to this queue forever and hurt performance. And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay this price just to poll a single queue? Or is a worker not the right tool for it? The script code:

        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end

    I need help!! Thanks

  • What's the difference between sudo su - postgres and sudo -u postgres?

    - by Craig Ringer
    PostgreSQL uses peer authentication on Unix sockets by default, where the Unix user must be the same as the PostgreSQL user. So people frequently use su or sudo to become the postgres superuser. I often see people using constructs like:

        sudo su - postgres

    rather than

        sudo -u postgres -i

    and I'm wondering why. Similarly, I've seen:

        sudo su - postgres -c psql

    instead of

        sudo -u postgres psql

    Without the leading sudo, the su versions would make some sense if you were on an old platform without sudo. But why, on a less-than-prehistoric UNIX or Linux, would you use sudo su?

  • Creating CNAME to delegated domain

    - by Starsky
    We are trying to configure F5 to do load balancing on 4 subdomains, similar to this article: http://support.f5.com/kb/en-us/solutions/public/0000/200/sol277.html For example:

        prod.wip.example.com. NS F5NS1.example.com.
        prod.wip.example.com. NS F5NS2.example.com.
        test.wip.example.com. NS F5NS1.example.com.
        test.wip.example.com. NS F5NS2.example.com.

    Then we want to make CNAMEs instead of delegating individual subdomains, e.g.

        myapp.example.com CNAME prod.wip.example.com

    Microsoft DNS gives an error when I attempt to create the CNAME:

        dnscmd ns1 /recordadd example.com myapp CNAME myapp.prod.wip.example.com.
        Command failed: DNS_ERROR_NOT_ALLOWED_UNDER_DELEGATION 9563 (0000255b)

    The error makes perfect sense, but does anyone know of a way around it? Or are my NS records incorrect for this setup? Thank you, -Steve

  • AFP / Apple Filing Protocol (aka Netatalk) access over the Internet

    - by PJJ
    I got a simple cloud server and thought it would be nice to have Mac-native AFP volume access. I installed Netatalk and it seems to work pretty nicely. There's no sensitive data or anything, but I don't want to wake up someday and have my www docs rm -rf'ed by some kid h4x0r.

        Q1: Is AFP encrypted?
        Q2: How can I make it (semi)secure?
        Q3: Does a VPN make sense for this?
        Q4: What would you do to get AFP working over the net?

    Opening up a service meant for LAN use only is a basic flaw, I know - but maybe I'm ignorant about it. According to the Apple dev docs only the authentication is encrypted, or am I missing something?
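
    Purely as an illustration of one way people avoid exposing AFP directly (a sketch; the host name and local port are hypothetical): tunnel the AFP port (TCP 548) over SSH and let the Finder connect to the tunnel instead of the public interface.

        # forward local port 10548 to the server's AFP service, keeping 548 firewalled off publicly
        ssh -N -L 10548:localhost:548 user@cloud.example.com

        # then in Finder: Go > Connect to Server > afp://localhost:10548

    A VPN achieves the same thing in a more general way; either approach means the AFP traffic itself never rides the open Internet unprotected.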

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it is generally applicable to any iSCSI system: should one prefer to create a target for every LUN you want to expose, or is it good practise to have a single target with multiple LUNs? Does either approach have a performance impact? And is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM, but a single target that contains all the LUNs would probably greatly simplify management... although we may then need hundreds of LUNs on a single target (and possibly tens of initiator connections to that target).
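
    For readers unfamiliar with the COMSTAR side, a rough sketch of the per-VM flow being described (all names are made up, and exact flags may differ between releases):

        # one zvol per VM disk
        zfs create -V 20G tank/vm/vm01-disk0

        # expose the zvol as a SCSI logical unit
        stmfadm create-lu /dev/zvol/rdsk/tank/vm/vm01-disk0

        # make the LU visible (target/host groups can scope it to a single target)
        stmfadm add-view <lu-guid-from-the-previous-step>

        # one iSCSI target per VM under the current scheme
        itadm create-target

    The question above is essentially whether that last step should happen once per VM or once in total.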

  • information about /proc/pid/sched

    - by redeye
    Not sure this is the right place for this question, but here goes: I'm trying to make some sense of the /proc/pid/sched and /proc/pid/task/tid/sched files for a highly threaded server process, but I was not able to find a good explanation of how to interpret these files (just a few bits here: http://knol.google.com/k/linux-performance-tuning-and-measurement#). I assume this entry in procfs is related to newer kernels that run the CFS scheduler? This is a CentOS distro running kernel 2.6.24.7-149.el5rt with the preempt-rt patch. Any thoughts?

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
          content => "http_proxy: $http_proxy\n",
          notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the restart does something like /etc/init.d/puppet stop && /etc/init.d/puppet start. Since puppet is then not running, it cannot start itself... it kind of makes sense. How do I make puppet restart itself when this file changes? Note that the file may not exist yet, either.
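
    One workaround people use (sketch only, untested here; it assumes atd is installed and running) is to notify an exec that schedules the restart out of band, so the process doing the starting is not the agent that is being killed:

        file { '/root/.gemrc':
          content => "http_proxy: $http_proxy\n",
          notify  => Exec['schedule-puppet-restart'],
        }

        exec { 'schedule-puppet-restart':
          command     => '/bin/sh -c "echo /etc/init.d/puppet restart | at now + 1 minute"',
          refreshonly => true,
        }

    The agent finishes its run normally, and atd brings it back up a minute later.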

  • Does scheduling app pool recycling in IIS7 help the server conserve memory?

    - by user29266
    Hello, I have a VPS (IIS7 on Windows 2008). It's got 40 websites and a SQL Server 2008 instance powering them, with only 2 GB of RAM. None of the sites are mission critical; they are all just demos. I often have RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much! Aron
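
    For reference, a sketch of how a fixed recycling schedule can be set from the command line (the pool name is hypothetical, and times are interpreted as local server time):

        %windir%\system32\inetsrv\appcmd set apppool "DemoSitePool" /+recycling.periodicRestart.schedule.[value='03:00:00']

        rem alternatively, recycle on a fixed interval (every 180 minutes) rather than at a clock time
        %windir%\system32\inetsrv\appcmd set apppool "DemoSitePool" /recycling.periodicRestart.time:03:00:00

    The same settings live under the pool's Recycling options in IIS Manager.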

  • Should I Upgrade My Old Wireless Router?

    - by lyngbym
    I have an old wireless router, and I mean stone-age old (5 years). There is nothing technically wrong with the router; it serves my wireless needs at home, but it is really darn old. A search on Belkin's site for F5D7230-4 actually turns up a different old model, so I scrounged up this old review for you to get a sense of what I'm running: http://www.pcmag.com/article2/0,2817,1572451,00.asp. Is there a valid security reason to replace this router in 2009? Google searches have turned up a few security threats against it, and Belkin hasn't released new firmware for it in years. I am starting to think I should replace it mainly because its NAT is about the only thing protecting me from the outside world. Buying a new wireless router is a boring way to spend money, since it just sits on a shelf doing its job. Thoughts?

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ... So here's my idea: Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyways, thanks for your insight! Andreas.
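
    A minimal sketch of the usual way to express this preference with apt pinning (the repository host name is hypothetical; the origin string must match the host apt sees for the local repository):

        # /etc/apt/preferences
        Package: *
        Pin: origin "apt.internal.example.com"
        Pin-Priority: 1001

    With a priority above 1000, any package that exists in the local repository is taken from there even if upstream carries a newer version, while packages the local repository does not know about keep coming from the normal Ubuntu mirrors at the default priority of 500.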

  • options' default values of autoconf ./configure scripts

    - by hamstergene
    Running ./configure --help gives a list of options that can be tweaked for the future build, for example:

        --enable-luajit    Enable LuaJit Support
        --disable-env      clearing/setting of ENV vars

    Though any option can be used with either the enable or the disable prefix, some are presented as --enable-me and others as --disable-me in the help output. Is this supposed to hint at the default values, and if yes, how do I figure them out? Because either reading makes sense to me: either luajit is disabled by default and is therefore presented as --enable-luajit so I can enable it by conveniently copy-pasting it from the help output to the command line, or being listed with --enable in the help output indicates that luajit is enabled by default.
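
    For context, a sketch of how such an option is typically declared in configure.ac (a hypothetical example, not taken from the package in question): the text shown by ./configure --help is just whatever help string the maintainer wrote, so the --enable/--disable wording is only a convention, while the actual default is the last argument of AC_ARG_ENABLE.

        AC_ARG_ENABLE([luajit],
          [AS_HELP_STRING([--enable-luajit], [Enable LuaJIT support])],
          [enable_luajit=$enableval],
          [enable_luajit=no])   dnl <- this is the default when neither flag is passed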

  • php.ini date.timezone usefulness?

    - by Buttle Butkus
    I'm not sure if this is a question for Server Fault or Stack Overflow, but it seems like it has a lot to do with server config. We have a server in Chicago, and the server's clock is on Chicago time. But since the business is located in California, it would seem to make sense to use Pacific time. What happens when server time is Chicago and the php.ini directive date.timezone is set to "America/Los_Angeles"? How will that affect logs written to MySQL, error logs, etc.? I've looked at the Apache error log and, as I expected, the PHP directive does not affect it; times are all server time. Thanks.
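
    A quick way to see the split in behaviour (a sketch; it assumes the CLI reads the same php.ini):

        ; php.ini
        date.timezone = "America/Los_Angeles"

        # PHP's own date/time functions follow the directive...
        php -r 'echo date("Y-m-d H:i:s T"), PHP_EOL;'

        # ...while the OS (and anything that logs through it) keeps using the system clock and zone
        date

    The directive only changes timestamps that PHP itself formats and writes; Apache's logs stay on server time, and timestamps generated by MySQL follow MySQL's own time_zone setting (SYSTEM by default), not php.ini.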

  • I'm looking for an online ASP.NET tutor.

    - by pkiyan
    $15/hr. I know it's not much, but... Hi. I'm looking for an ASP.NET tutor. I want to use a remote desktop application so we can see each other's screens, and use Skype or phone to communicate. You won't need to come up with any lessons or anything like that. I was thinking we could spend an hour or two each time we logged in to build a decent-sized website from scratch. That's basically it. I'm a beginner with about 2 months of experience with ASP.NET, so we won't have to start from the very beginning, but pretty close. I want this site to have a little complexity to it and not just be a website for beginners, but something I could study for a while. I'll pay you through PayPal, or some other method if you prefer.

    By the way, it doesn't have to be a website that we work on together. I'll listen to other suggestions too. Maybe we could use an open source site/app to walk through, study and modify. I've looked at 'My Web Pages Starter Kit 1.30', 'SubText 2.1.2', 'nopCommerce 1.5', and some others. They were all beyond me, and I couldn't make sense of any of the source code. But if you use and are really familiar with an open source app/site that I can download, we could study that.

    Here are some technical specs about the site I'd like to build/study:

        ASP.NET 2.0+ (preferably 3.5+, but I don't really care)
        C# / VB.NET (don't really care, I suck at both. This is more about ASP.NET and helping me understand the structure of an ASP.NET website and the .NET framework in general.)
        SQL Server (I have SQL Server 2008 Express and would someday like to learn how to use this thing.)
        JavaScript / AJAX (at least some use of this)
        XML (basically, I'd like to spend some time in the web.config file and have some sense of what's going on in there.)
        ASP.NET folders (I'd like to work with all of the ASP.NET folders if possible: App_Code, App_GlobalResources, etc., and understand what does/doesn't go in them. Hopefully we can build more than one theme too.)
        Assemblies (how do you create a .dll and use it across different websites? Maybe you could suggest a third-party .dll that we could use.)
        Web services (I read about this once but didn't really get it.)

    I can't think of anything else, but the above will definitely keep me busy. Hopefully we could make use of a lot of the server controls too (the nav controls gave me a headache when I tried customizing them). Is someone willing to help? I'll pay through PayPal, 15 bucks an hour. I live in the Dallas, Texas (US) area, so we'd have to synchronize time zones and agree on a day(s)/time of the week. I prefer working at night and on the weekends because I work during the week, but whatever your schedule allows too. If you'd like to help me, can you post: your years of experience with ASP.NET, your time zone and the times you're available, and any ideas you might have about how you'd like to tutor? THANK YOU.

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    Hi, I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS drive. When I write a file to the device using eSATA or an SMB share, I cannot write files over 2 GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP it doesn't matter; I can write files of any size. E.g. I wrote one onto it last night which was 30 GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

  • X11 for apache user

    - by fuenfundachtzig
    We are using inkscape to convert SVG images uploaded to our server via a web form. For this, inkscape offers a batch mode via the -z option, but this batch mode has a flaw: when inkscape is run by the apache user, it breaks, saying

        $ inkscape -z -W drawing.svg
        X11 connection rejected because of wrong authentication.
        The application 'inkscape' lost its connection to the display localhost:11.0; most likely the X server was shut down or you killed/destroyed the application.

    If you do the same as a normal user you also get errors:

        Xlib: connection to "localhost:11.0" refused by server
        Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
        (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
        301.27942

    but at least inkscape gives the correct answer (here, the number is the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (and if so, how)? In any case it doesn't feel like the right solution...
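
    One direction worth noting (a sketch, untested on this setup): instead of authorising the apache user against a real, forwarded display, give inkscape a throw-away virtual framebuffer of its own.

        # install the virtual X server (package name on Debian/Ubuntu; adjust for other distros)
        apt-get install xvfb

        # xvfb-run starts a private Xvfb display, runs the command against it, and tears it down
        xvfb-run -a inkscape -z -W drawing.svg

    This keeps the web user away from the interactive X session entirely, which also avoids the MIT-MAGIC-COOKIE mismatch shown above.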

  • Windows Firewall failing after 9-12 hours?

    - by routeNpingme
    I have 2 VM servers with the exact same NIC configuration: Server 2003 R2, one NIC connected to a private (hardware-firewalled) network in a 10.x private address space, and one NIC connected straight to the public Internet. Windows Firewall is enabled on the public Internet NIC only. Now, what doesn't make sense - this fails generally after 9-12 hours. It's not exact, but once or twice a day, traffic will just stop on the Internet NIC. There are no event log entries when it happens, and restarting the Windows Firewall service, as well as stopping or restarting IPSec Services (just for fun), has no effect. Once the server is rebooted, everything is fine again for another half day. Any suggestions?
