Search Results

Search found 30912 results on 1237 pages for 'load path'.


  • Recent update killed unity 3d launcher

    - by Steve
    I am scratching my head on this one; a lot of things are still new to me. I updated 126 packages just now through the update manager, and upon reboot everything works fine except the Unity launcher. It's just a dark space. The dash still works, as does the top panel and Docky. When I try unity --replace I end up with this and then an indefinite hang:

        (compiz:3689): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        WARN 2012-09-23 02:18:29 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubiquity-gtkui.desktop'
        WARN 2012-09-23 02:18:30 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubuntuone-installer.desktop'
        ERROR 2012-09-23 02:18:30 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-writer.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-calc.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-impress.desktop' is using a deprecated format for its actions that will be dropped soon.
        Setting Update "main_menu_key"
        Setting Update "run_key"

    Unfortunately I cannot make heads or tails of this. Anyone, please help?
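
    Given the GConf-CRITICAL assertion above, one hedged avenue is resetting Compiz's GConf tree and restarting Unity. This is a sketch, not a confirmed fix; /apps/compiz-1 is the usual GConf path for Compiz on 12.04-era systems, so verify it in gconf-editor first:

        gconftool-2 --recursive-unset /apps/compiz-1   # wipe Compiz settings back to defaults
        unity --replace &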

  • Random users randomly being unable to connect to my static content domain

    - by jls33fsls
    I store all of my images, JS, and CSS files on a separate domain to try to speed up page load times (it isn't a CDN, just a separate domain on the same server). This works fine for 99% of the users, 99% of the time. However, some users are randomly unable to connect to the static content domain for periods of 1-5 hours. They can go to the main site, but no images will load and everything is just white because no CSS is being loaded. If they go to the static content domain itself, the page just idles for a while and then times out with a blank white page, no error messages. I have no idea what could be causing this, and it has never happened to me. Any ideas? I am running Apache on CentOS 5.5.
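
    One way to narrow this down is to have an affected user capture name resolution and the raw HTTP exchange while the problem is occurring (a sketch; static.example.com stands in for the real static domain):

        dig static.example.com                         # does the name resolve, and to the expected IP?
        curl -Iv http://static.example.com/style.css   # where exactly does the connection stall?

    If dig returns the wrong answer, or none, for exactly these users, the problem is DNS rather than Apache.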

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing a VPN as a service for the company I'm working for. This is basically a project gathering together various VPN resources under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with being able to connect, but I'm looking for resources to start this project. Essentially I need to be able to: run a certificate authority and manage keys to distribute to coworkers; build an AMI that handles OpenVPN as a service; and balance the load among machine instances as needed. Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, etc., or just help in visualizing this, would be appreciated.
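
    For the certificate authority piece, a minimal sketch with easy-rsa 3 (assuming easy-rsa is acceptable tooling; the server and client names are illustrative):

        ./easyrsa init-pki
        ./easyrsa build-ca                         # creates the CA key pair
        ./easyrsa build-server-full vpn-gw nopass  # server cert for the OpenVPN instance
        ./easyrsa build-client-full alice          # one signed cert per coworker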

  • What would be better in my case - apache, nginx or lighttpd?

    - by The Devil
    Hey everybody, I'm writing a PHP site that's expected to get about 200-300 concurrent users browsing it. On initialization the application will load about 30 PHP classes, some 10, maybe 15, images and a couple of CSS files. So my question is: what else can I do (besides optimizing my code and using APC/eAccelerator for PHP) to get as close as possible to those numbers of concurrent users? Currently we haven't chosen a server for the site to be hosted on, but most probably it'll be a VPS, dual core + 2 or maybe 4 GB RAM. Is it possible for such a server to handle that load? Also, how could I test it myself and be sure that it'll be able to handle it? Thanks in advance, Me
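
    For the self-test, ApacheBench (ab, shipped with Apache) gives a quick first answer; a sketch against a placeholder URL:

        ab -n 10000 -c 250 http://your-site.example/

    -n is the total number of requests and -c the concurrency; the "Requests per second" and "Failed requests" lines tell you whether 250 concurrent users are realistic on the chosen box.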

  • Where to install boot loader on a Zenbook Prime?

    - by Christians
    I cannot figure out where to install the boot loader on my Zenbook UX31A Prime. I have installed Ubuntu many times on normal hard drives, but this is my first SSD and I am struggling. I installed Ubuntu 12.04 64-bit, selecting the "UEFI: general" boot entry, with installation type "Something Else". I created partition /sda5 mounted as /, /sda6 mounted as /home, and /sda7 as swap, and selected /dev/sda for boot loader installation (the other options were /dev/sda1 and /dev/sda3 "Windows 7 (loader)"). GRUB comes up with six entries:

    - Ubuntu - this runs great
    - Linux 3.2.0-29-generic recovery mode - hangs with "fb: conflicting fb hw usage inteldrmfb vs EFI VGA - removing generic driver"
    - memtest86: error: unknown command `linux16'
    - memtest86 serial: error: unknown command `linux16'
    - Windows 7 (loader) (on /dev/sda3): invalid EFI file path
    - Windows Recovery Environment (on /dev/sda8): unknown command `drivemap', invalid EFI file path

    My workaround for booting Windows 7 is hitting ESC during boot so the firmware's boot menu comes up: to boot into Windows 7 I select "Windows Boot Manager (P0: SanDisk ...)", and to boot into Ubuntu I select "ubuntu (P0: SanDisk...)". How can I boot into Windows from GRUB?
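
    Since the firmware's own Windows Boot Manager entry works, one common sketch is to chainload that same EFI binary from a custom GRUB entry. The paths below are the usual defaults and may differ on this machine; put the entry in /etc/grub.d/40_custom and run sudo update-grub afterwards:

        menuentry "Windows Boot Manager (chainload)" {
            insmod part_gpt
            insmod fat
            search --fs-uuid --set=root XXXX-XXXX   # UUID of the EFI system partition, from blkid
            chainloader /EFI/Microsoft/Boot/bootmgfw.efi
        }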

  • Website Ethics / legal issues, image copyrights

    - by RailsN00b
    Ignoring the technical implementation of a website for a second, assume a website that is similar to Twitter but with pictures. A user says something and posts a picture of whatever they said. Given the nature of the internet, the image will most likely not be his/her own. There are two options that I see for dealing with this:

    1. The user posts a URL of the picture, and the website pulls the picture from that URL every time someone visits the page.
    2. The website saves the image in its own database of images and displays the image to visitors "locally".

    The problem with option #1: while it saves storage, I see an issue with "stealing" other websites' bandwidth, and if my website has many, many visitors it could cost the image-hosting websites a lot and possibly even crash them if their servers can't handle the load. The problem with option #2: while it spares other websites the load, it practically means copying pictures that could have copyright on them. Which option is better to implement in terms of legal issues and ethics? When do I need to contact another website to request permission to use the images from that site? Does anyone really care about that anymore? Where can I read about this?

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have 2 Debian Linux 6.0.4 servers that show a strange behaviour: after 5-7-10 days they hang. By this I mean the servers need to be restarted, and before that ping won't answer. I've been struggling with this problem for a couple of months now; here are some thoughts on what I've tried, without being able to solve the problem:

    - I changed the RAM on one server. These being 2 different servers, I doubt it's hardware-related, as a 3rd identical server doesn't have this problem.
    - I logged the server load, and when a server crashes the load is fine (quite low).
    - I cannot find anything in the server logs; the logs are fine right up until the server freezes.
    - I don't have access to the console, unfortunately.

    While I have years of admin experience, I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)
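
    Without console access, one sketch for catching a silent lockup is to make the kernel reboot itself on panics and escalate detected soft lockups into panics (both are standard sysctls; whether they fire depends on what is actually hanging the box):

        # /etc/sysctl.conf
        kernel.panic = 30             # auto-reboot 30 seconds after a panic
        kernel.softlockup_panic = 1   # turn soft lockups into panics

    Pairing this with netconsole, which streams the last kernel messages to another host over UDP, often captures the backtrace that never makes it to disk.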

  • How to combine wildcards and spaces (quotes) in a Windows command?

    - by Jan Fabry
    I want to remove directories of the following format: C:\Program Files\FogBugz\Plugins\cache\[email protected]_NN where NN is a number, so I want to use a wildcard (this is part of a post-build step in Visual Studio). The problem is that I need to combine quotes around the path name (for the space in Program Files) with a wildcard to match the end of the path. I already found out that rd is the remove command that accepts wildcards, but where do I put the quotes? I have tried no ending quote (works for dir), ...example.com*", ...example.com"*, ...example.com_??", ...cache\"[email protected]*, ...cache"\[email protected]*, but none of them work. (How many commands to remove a file/directory are there in Windows, anyway? And why do they all differ in capabilities?)
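
    If rd keeps rejecting every quote/wildcard combination, a common workaround sketch is to let for /d expand the wildcard and quote each match individually (cmd syntax; inside a batch file or a Visual Studio post-build step, double the percent signs):

        for /d %d in ("C:\Program Files\FogBugz\Plugins\cache\*_??") do rd /s /q "%d"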

  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with Nagios. I've created the following command definition and shell script, but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with
        # internal /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
          sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path, if it fails exit with an error.
        if ! cd "$app_root" ; then
          echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
          if ! mkdir -p "$pid_path"; then
            echo "Could not create the path $pid_path needed to store the pids."
            exit 1
          fi
          # If there exists a file which should hold the value of the Unicorn pid: read it.
          if [ -f "$web_server_pid_path" ]; then
            wpid=$(cat "$web_server_pid_path")
          else
            wpid=0
          fi
          if [ -f "$sidekiq_pid_path" ]; then
            spid=$(cat "$sidekiq_pid_path")
          else
            spid=0
          fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
          check_pids
          # If the web server is running kill -0 $wpid returns true, or rather 0.
          # Checks of *_status should only check for == 0 or != 0, never anything else.
          if [ $wpid -ne 0 ]; then
            kill -0 "$wpid" 2>/dev/null
            web_status="$?"
          else
            web_status="-1"
          fi
          if [ $spid -ne 0 ]; then
            kill -0 "$spid" 2>/dev/null
            sidekiq_status="$?"
          else
            sidekiq_status="-1"
          fi
        }

        check_pids
        check_status
        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
          echo "GitLab is not running."
          exit 2
        fi
        if [ "$web_status" != "0" ]; then
          printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
          exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
          printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
          exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
          printf "GitLab and all it's components are \033[32mup and running\033[0m.\n"
          exit 0
        fi
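
    The mail shows sudo asking the nagios user for a password when the script re-executes itself as git. A sketch of a sudoers rule (added via visudo) that allows the logged command without a password; this assumes Nagios runs the plugin as the "nagios" user, and the command spec should be adjusted to whatever sudo actually logs:

        # /etc/sudoers.d/nagios-gitlab
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh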

  • Hyper-V guest machine loads slowly

    - by Dani Avni
    This is by far one of the strangest things I have seen. I have a Win 2008 R2 cluster with a CSV; the CSV itself is on iSCSI storage (Hitachi HUS 110). The basic config of the two hosts in the cluster is: Dell R610, Win 2008 R2 with all patches, 64 GB RAM, 1 NIC for host access, 2 NICs for guest access, 2 NICs for iSCSI. These machines work great, and I can load a 2008 R2 test guest machine on them in less than 90 seconds.

    With the above config running for over a year, I now needed to add a new host: Dell R620 (still Intel, but a different CPU), Win 2008 R2 with all patches, 64 GB RAM, 1 NIC for host access, 2 NICs for guest access, 2 NICs for iSCSI. I added this new host to the domain and to the cluster, gave it access to the CSV, and tried loading the same guest machine that loads in 90 seconds on the other hosts. It loads in about 6 minutes. No matter how many times I try this, the old hosts load the machine in about 90 seconds and the new host in around 6 minutes.

    To eliminate problems with the iSCSI connection, I added a new LUN and accessed it directly from the new host: I was working at around 300 MB/s, so no problem there. I also tested the connection between the other hosts and the new one, and the network is working fine there too. To eliminate problems in Hyper-V, I copied the machine to the local disk of the new host, and it loaded in less than 20 seconds.

    Now is the point where things get a lot stranger: in my tests I tried installing a fresh Windows guest machine to the CSV from the new host. I noticed that while the fresh Windows was installing, my test guest was loading in less than 90 seconds even on the new host (I repeated this a few times). If I paused the fresh-install guest and tried loading the test guest again, it loaded in 6 minutes; and again, after I resumed the guest installation, the test guest loaded fast. After the fresh Windows was also loaded, I ran tests loading the fresh Windows guest and my test machine: each one of them loaded in about 5 minutes when I tried loading them separately, but when I started both of them at the same time they both loaded in around 2.5 minutes. It seems that iSCSI disk access is only fast if it is under some load (although I never got above 10% utilization according to Task Manager). Does anyone have any idea what could be the problem?
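
    "Fast only when there is other traffic" is a classic delayed-ACK symptom: with sparse traffic, Windows can sit on acknowledgements for iSCSI segments, throttling the session. Whether that is the culprit here is only a hypothesis, but the TcpAckFrequency registry value is documented for Windows and is the usual knob to test with (set it on the iSCSI NIC's interface key, then reboot):

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface-GUID>
            TcpAckFrequency (REG_DWORD) = 1   ; ACK every segment instead of every other one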

  • Windows Event Viewer - XML Custom Filter

    - by Frank
        <QueryList>
          <Query Id="0" Path="Application">
            <Select Path="Application">
              *[EventData[Data and (Data="Error")]]
            </Select>
          </Query>
        </QueryList>

    I believe the above XML custom filter would work if I wanted to check for events where "Data" equals the word "Error". However, what I want to express is that I want the events where Data CONTAINS the word "Error" ... how do I express that? I've Googled around, but I can find no references to regular-expression-like pattern matching in the Event Viewer. XPath has "contains", but if Event Viewer supports it, I cannot seem to figure out the syntax for invoking it.
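
    For reference, the plain XPath 1.0 spelling of the substring test would be the sketch below. Note, though, that the Event Log service only implements a restricted XPath subset, and by most accounts it rejects contains(); if it does, post-filtering in PowerShell (Get-WinEvent piped through Where-Object) is the usual fallback:

        <Select Path="Application">
          *[EventData[Data[contains(., "Error")]]]
        </Select>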

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

    - master.ourdomain.com - the file-syncing "hub" of the webservers
    - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync, even though they aren't syncing against each other directly.

    I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date; then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old. So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines.

    We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }

        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }

        sync{
            default.rsyncssh,
            source="/home/user/www/",
            host="[email protected]",
            targetdir="/home/user/www/",
            rsyncOpts="-ltus",
            excludeFrom="/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
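
    One sketch that sidesteps the Lua config entirely: wrap lsyncd's startup so the one-way catch-up always runs first (user@master stands in for the redacted master hostname, and the config path is illustrative; add --delete to the rsync only if the webserver should become an exact copy of master's tree):

        #!/bin/sh
        # on each webserver: catch up from master first, then start watching
        rsync -ltus user@master:/home/user/www/ /home/user/www/ \
            && exec lsyncd /etc/lsyncd/lsyncd.conf.lua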

  • The tale of how the PowerShell CmdLets got installed with Azure SDK 1.4

    - by Enrique Lima
    I installed the Azure SDK 1.4 while rebuilding my laptop and then ran the installation for the Windows Azure Service Management (WASM) PowerShell CmdLets. I kicked off the installation script by locating the path to which the WASM PowerShell CmdLets were deployed and double-clicking the startHere command. It opens the WASM installation dialog; click Next, then Next again. Notice the red X next to the Azure SDK 1.3 dependency check - the problem is that I have SDK 1.4. Here is the workaround: go back to the location of the deployed WASM sources, then into the setup path, then scripts > dependencies > check. Locate the CheckAzureSDK.ps1 file, right-click it, and choose Edit. This ps1 file checks for a specific version of the Azure SDK, in this case version 1.3.11133.0038; we need it to check for version 1.4.20227.1419 instead. Save the ps1 file, go back to the open WASM install dialog, and click Rescan. This time the check should pass; click Next. A command prompt window will appear; press any key. This completes the installation; click Close.

  • File store: CouchDB vs SQL Server + file system

    - by Andrey
    I'm exploring different ways of storing user-uploaded files (all are MS Office documents or the like) on our high-load web site. It's currently designed to store documents as files and have a SQL database store all metadata for those files. I'm concerned about outgrowing the storage server, and about SQL Server performance, when the number of documents reaches hundreds of millions. I was reading a lot of good information about CouchDB, including its built-in scalability and performance, but I'm not sure how storing files as attachments in CouchDB would compare to storing files on a file system in terms of performance. Has anybody used CouchDB clusters for storing LARGE numbers of documents in a high-load environment?

  • FTP restrict user access to a specific folder

    - by Mahdi Ghiasi
    I have created an FTP site inside the IIS 7.5 panel. Right now I have access to the whole site using the administrator username and password. Now I want to let my friend access a specific folder of that FTP site (for example, this path: \some\folder\accessible\). I can't create a whole new FTP site for this purpose, since it says the port is being used by another website. How do I create an account for my friend that has access to just a specific folder? P.S. I have read about the User Isolation feature of IIS 7.5, but I couldn't find out how to create a user just for FTP and restrict it to a custom path.

  • Linking Libraries in iOS?

    - by Bob Dole
    This is probably a totally noob question, but I have missing links in my mind when thinking about linking libraries in iOS. I usually just add a new library that's been cross-compiled and set the build and linker paths without really knowing what I'm doing. I'm hoping someone can help me fill in some gaps. Let's take the OpenCV library, for instance. I have this totally working, btw, because of a really well-written tutorial (http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en), but I'm just wanting to know what exactly is going on. What I think is happening is that when I build OpenCV for iOS, I'm creating object code that gets placed in the .a files. This object code is just the implementation files (.m) compiled. One reason you would want to do this is to make it hard to see the source code, and so that you don't have to compile that source code every time. The .h files won't be put in the library (.a). You include the .h in your source files, and these header files communicate with the object code library (.a) in some way. You also have to include the header files for your library in the build path and the library itself in the linker path. So, is the way I view linking libraries correct? If not, can someone correct me on this?
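
    That mental model is easy to make concrete on the command line (a sketch with made-up file names; Xcode performs the same steps with iOS cross-compilation flags added):

        clang -c Blur.m -o Blur.o                 # implementation -> object code
        ar rcs libblur.a Blur.o                   # object files -> static library archive
        clang App.m -Iinclude -L. -lblur -o App   # headers give declarations; the .a supplies definitions at link time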

  • Getting the PC speaker to beep

    - by broiyan
    There has been much written on getting the beep sound from Ubuntu releases over the years (example: "fixing the beep"). My needs are slightly different in that I do not want to ensure sound card beeps are functioning. Instead, I want PC speaker beeps, the kind produced by the original built-in speaker, because I believe they will produce less CPU load. I have confirmed that my computer has the PC speaker by unplugging the external speakers and shutting down Ubuntu: at some point in the shutdown and restart process a beep is heard even though the external speakers have no power. I have tried the following:

    - In /etc/modprobe.d/blacklist.conf, turn these lines into comments:
          #blacklist snd_pcsp
          #blacklist pcspkr
    - In .bashrc:
          /usr/bin/xset b on
          /usr/bin/xset b 100
    - Enable in the GNOME terminal: Edit > Profile Prefs > General > Terminal Bell
    - Ensure no "mute" selections in System > Prefs > Sound, various tabs (uncheck them all).
    - Select "Enable window and button sounds" in System > Prefs > Sound > Sound Effects.
    - In gconf-editor > desktop > gnome > sound, select the three sound check boxes.
    - In gconf-editor > apps > metacity > general, select the audible bell check box.

    Still I get no PC speaker beeps when I send code 7 to the console via my Java program or use echo -e '\a' on the bash command line. What else should I try?

    Update: Since my goal is to minimize load on the CPU, here is a comparison of elapsed times. Each test is for 100,000 iterations; each variant was performed three times, so three results are presented for each.

        printwriter.format("%c", 7);                                           // 1.3 seconds, 1.5 seconds, 1.5 seconds
        Toolkit.getDefaultToolkit().beep();                                    // 0.8 seconds, 0.3 seconds, 0.5 seconds
        try { Runtime.getRuntime().exec("beep"); } catch (IOException e) { }   // 10.3 seconds, 16.3 seconds, 11.4 seconds

    These runs were done inside Eclipse, so multiply by some value less than 1 for standalone execution. Unfortunately, Toolkit's beep is silent on my computer and so is code 7. The beep utility works but has the most cost.
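
    With the blacklist entries commented out, it's worth confirming the pcspkr module is actually loaded before testing again (a quick sketch; the beep utility may need to be installed from the repositories first):

        lsmod | grep pcspkr    # is the driver present at all?
        sudo modprobe pcspkr   # load it for this session
        beep -f 440 -l 200     # drive the PC speaker directly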

  • View a pdf with quick webview though apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). This site serves PDFs which have quick web view, and if I access a PDF directly on the IIS server the PDF starts to display immediately, but if I go through the proxy I have to wait until the entire PDF downloads before I can view it. In the Apache config file I use:

        ProxyPass /path/ http://xxx.xxx.xxx.xxx/
        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
        </LocationMatch>

    I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that view quickly make a lot of partial requests. This is the initial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip

        HTTP/1.1 200 OK
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 15330238
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    This is a partial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept: */*
        Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip
        Range: bytes=0-32767

        HTTP/1.1 206 Partial Content
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 32768
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Range: bytes 0-32767/15330238
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    These are the headers I get if I go through the proxy:

        GET /path/xxx.PDF HTTP/1.1
        Host: domain:xxxx
        Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8

        HTTP/1.1 200 OK
        Date: Mon, 25 Aug 2014 13:28:42 GMT
        Server: Microsoft-IIS/7.5
        Content-Type: application/pdf
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        ETag: "b6262940bbecf1:0"-gzip
        X-Powered-By: ASP.NET
        Cache-Control: no-cache
        Expires: Thu, 24 Aug 2017 13:28:42 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=300, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked

    I'm guessing it's because the proxy uses Transfer-Encoding: chunked, but I'm not sure, and I wasn't able to turn it off to check. Browser: Chrome 36.0.1985.143 m, using the native PDF viewer. Any help getting PDF quick web view working through the proxy would be appreciated.
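
    The chunked guess points in the right direction, but the likelier culprit visible in the trace is Content-Encoding: gzip: once a response is compressed on the fly, byte ranges can no longer be honored, so the viewer has to fetch the whole file (note the proxied response is a 200, never a 206). If mod_deflate on the Apache side is doing the compressing - the "-gzip" suffix on the ETag suggests it is - a sketch that exempts PDFs, using standard mod_setenvif syntax:

        SetEnvIfNoCase Request_URI "\.pdf$" no-gzip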

  • Organization of DLL-linked functions

    - by m25
    So this is a code organization question. I got my basic code working, but when I expand it, it will be terrible. I have a DLL that I don't have a .lib for, therefore I have to use the whole LoadLibrary()/GetProcAddress() combo. It works great, but this DLL that I'm referencing has 100+ functions. My current process is: (1) typedef a type for the function, e.g. typedef short (_stdcall *type1)(void); then (2) declare a function pointer with the name I want to use, such as type1 function_1; then (3) do the whole LoadLibrary, then something like function_1 = (type1)GetProcAddress(hinstLib, "_mangled_funcName@5");. Normally I would put all of my function declarations in a header file, but because I have to use the LoadLibrary function it's not that easy; the code will be a mess. Right now I'm doing (1) and (2) in a header file and was considering making a function in another .cpp file to do the LoadLibrary and dump all of the (3)'s in there. I considered using a namespace for the functions so I can use them in the main function and not have to pass them over from the other function. Any other tips on how to organize this code so that it is readable and organized? My goal is to be able to use function_1 as a regular function in the main code. If I have to do ref::function_1 that would be okay, but I would prefer to avoid it. This code for all practical purposes is just plain C at the moment. Thanks in advance for any advice!
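
    One common way to keep 100+ bindings manageable is an X-macro list: each import is declared once, and the same list generates the extern declarations, the pointer definitions, and the GetProcAddress binding. A sketch in plain C - function_1 and its mangled name come from the post, function_2 is hypothetical, and error handling is kept minimal:

        /* dll_imports.h -- one X() entry per import: pointer name, exported symbol, return type, argument list */
        #define DLL_IMPORTS \
            X(function_1, "_mangled_funcName@5", short, (void)) \
            X(function_2, "_anotherFunc@13",     short, (int))

        /* declare an extern function pointer for every import */
        #define X(name, sym, ret, args) extern ret (__stdcall *name) args;
        DLL_IMPORTS
        #undef X

        /* dll_imports.c -- define the pointers and bind them all at once */
        #include <windows.h>
        #include "dll_imports.h"

        #define X(name, sym, ret, args) ret (__stdcall *name) args = NULL;
        DLL_IMPORTS
        #undef X

        int load_dll_imports(const char *path)  /* 0 on success, -1 on any failure */
        {
            HINSTANCE lib = LoadLibraryA(path);
            if (!lib) return -1;
        #define X(name, sym, ret, args) \
            name = (ret (__stdcall *) args)GetProcAddress(lib, sym); \
            if (!name) return -1;
            DLL_IMPORTS
        #undef X
            return 0;
        }

    Callers then include dll_imports.h, call load_dll_imports() once at startup, and use function_1 like an ordinary function, which matches the stated goal.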

  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was accidentally unplugged from the server. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more often than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly, and loading through the local network (e.g. localhost/cfide/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary... I'm just not sure what information you might need in order to help. Thanks for your time.

  • Incorporating libs into module pattern

    - by webnesto
    I have recently started using require.js (along with Backbone.js, jQuery, and a handful of other JavaScript libs) and I love the module pattern (here's a nice synopsis if you're unfamiliar: http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth). Something I'm running up against is best practice for incorporating libs that don't support the module pattern out of the box. For example, jQuery without modification is going to load into a global jQuery variable, and that's that. Require.js recognizes this and provides an example project for download with a (slightly) modified version of jQuery to incorporate with a require.js project. This goes against everything I've ever learned about using external libs - never modify the source. I can list a ton of reasons. Regardless, this is not an approach I'm comfortable with. I have been using a mixed approach, wherein I build/load the "traditional" JS libraries in a "traditional" way (available in the global namespace) and then use the module pattern for all of my application code. This seems okay to me, but it bugs me because one of the real beauties of the module pattern (no globals) is getting perverted. Anyone else got a better solution to this problem?
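
    For what it's worth, RequireJS 2.0 later added a declarative answer to exactly this problem: the shim config, which adapts a global-exporting lib without touching its source. A sketch (the paths are illustrative):

        require.config({
            paths: { jquery: 'libs/jquery', underscore: 'libs/underscore', backbone: 'libs/backbone' },
            shim: {
                underscore: { exports: '_' },
                backbone: { deps: ['underscore', 'jquery'], exports: 'Backbone' }
            }
        });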

  • Nohup & Sass: Process keeps running but, after a while, *.scss files do not get compiled

    - by maurits
    I am using Sass on a CentOS 5.8 server and want it to keep running after SSH logout, so that other users can edit *.scss files for days or even weeks to come without any need to start the program each time they log in (in fact, they don't even have SSH access). I have used the following command from this question/answer:

        $ nohup sass --watch path/to/scss/files:path/to/css/output/files &

    Then I log out of the SSH session and the process keeps running. It all works fine for the first few minutes (logging in again and using touch to create a test file, test.scss, correctly triggers the creation of the corresponding test.css file), but after a while the *.scss files stop getting compiled. However,

        ps aux | grep 'sass'

    shows that the process is still running. Does anybody know what I am doing wrong?
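
    Two hedged avenues to try: filesystem-event watches can be dropped silently while the process lives on, so forcing Sass to poll for changes may help (Ruby Sass documents a --poll flag for exactly this); alternatively, keep the watcher inside a terminal multiplexer so it runs in a normal session that survives logout:

        nohup sass --watch --poll path/to/scss/files:path/to/css/output/files &
        # or:
        screen -dmS sasswatch sass --watch path/to/scss/files:path/to/css/output/files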

  • Handling FreeBSD package upgrades using pkg_add

    - by larsks
    I'm trying to use FreeBSD's pkg_add command to install and upgrade binary packages in a build-once-install-on-multiple-machines sort of scenario. It works well when installing a new package, but upgrades are baffling me. For example, if I want to upgrade a package that is depended on by another package, I can't just install it:

        # pkg_add /path/to/somepackage-2.0.tbz
        pkg_add: package 'somepackage' or its older version already installed

    At this point, I can delete the older version of the package if I pass -f to the pkg_delete command:

        # pkg_delete -f somepackage-1.0
        pkg_delete: package 'somepackage-1.0' is required by these other packages and may not be deinstalled (but I'll delete it anyway): anotherpackage-1.0

    But... and this is the killer... now the dependency information is gone! I can install the upgrade:

        # pkg_add /path/to/somepackage-2.0.tbz

    And now attempts to delete it will succeed without any errors:

        # pkg_delete somepackage-2.0

    How do I handle this gracefully (whereby "gracefully" means "in a fashion that preserves dependency information without requiring me to rebuild/reinstall an entire dependency chain")? Thanks!
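
    One hedged workaround: with the old pkg_* tools, the reverse-dependency list is just a text file, /var/db/pkg/<pkgname>/+REQUIRED_BY, so it can be saved across the forced delete and restored afterwards. A sketch - verify the database layout on your release, and note the depending packages' own +CONTENTS files will still record the old version string:

        cp /var/db/pkg/somepackage-1.0/+REQUIRED_BY /tmp/required_by.bak
        pkg_delete -f somepackage-1.0
        pkg_add /path/to/somepackage-2.0.tbz
        cp /tmp/required_by.bak /var/db/pkg/somepackage-2.0/+REQUIRED_BY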

  • Automatically mount a remote folder on boot

    - by Andrew
    I'm trying to mount a Windows folder on my Ubuntu machine on start up. I've tried following this page here, modifying /etc/fstab and appending

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse user 0 0

    to it, but it fails; on start up I get an error saying that the mounting failed, and I can press S to skip or M to recover manually. I also tried following this page here, appending

        /usr/bin/sshfs -o idmap=user my_user@remote_host:/path/to/directory <local_mount_point>

    to the /etc/rc.local file, but this doesn't help either; Ubuntu just boots up normally without mounting. I have Cygwin installed on my Windows machine, and I can run everything smoothly, such as SSHing without passwords and mounting manually. I've also tried running the modified rc.local file ($ /etc/rc.local), and it works perfectly, but I just can't seem to get the folder mounted on start up. Can someone help me?
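
    A common wrinkle here: at boot, the mount is attempted by root, before the network is fully up and without your user's SSH keys. A hedged fstab sketch that addresses both (the option names are standard mount/sshfs options; the key path is illustrative and must be readable by root):

        sshfs#my_user@remote_host:/path/to/directory <local_mount_point> fuse _netdev,allow_other,IdentityFile=/home/andrew/.ssh/id_rsa 0 0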

  • Windows 8 BIOS - Boot Ubuntu from External HDD

    - by F3AR3DLEGEND
    My laptop came preloaded with Windows 8 64-bit (the only storage device is a 128 GB SSD). Since it is my school laptop, and I've heard creating a Linux partition alongside Windows 8 is not very wise, I installed Ubuntu onto my external hard drive. I have a 500 GB external HDD with the following partitions:

    - Main partition - NTFS - ~400 GB
    - Extended partition:
      - / - ext2 - ~25 GB
      - /home - ext2 - ~30 GB
      - swap - ext2 - 10 GB
      - /boot - ? - 10 GB (? = not sure of the partition type)

    Using the PenDriveLinux installer, I created a LiveUSB version of Ubuntu 12.04 (LTS) on a 4 GB USB drive. Using that, I installed Ubuntu onto the external hard drive, without any errors (or at least none that I was notified of). Using the BIOS settings, I changed the OS-loading order to: 1. my external USB HDD, 2. the Windows Boot Loader, 3. some other things. Therefore Ubuntu should load from my hard drive first, but it doesn't. Also, my hard drive is in working condition, and it turns on when the BIOS starts (there is a light indicator). When I start my laptop, it goes directly to Windows 8 (I have the fast startup setting disabled as well). So, is there any way for me to set it up so that when my HDD is connected, it will automatically load Ubuntu? Thanks in advance!
