Search Results

Search found 26912 results on 1077 pages for 'default programs'.


  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal. (Normal being 1.5 GB free in the case of memory.) This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing--anything to do with our server. No events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time--different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it.) I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXPSP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?

    Read the article

  • Opera's problem with magnet-links

    - by Dmitriy Matveev
    I've encountered the following problem while using the Opera web browser: When I click on a magnet link on some web page, like magnet:?xt=urn:tree:tiger:CXW6MJFRNOEFU2STCBWWOIYZLVCR2FTR37SQCXY&xl=352342016&dn=ER%20-%207x16%20-%20Witch%20Hunt.avi Opera asks me if I want to open that link with my DC++ client. If I click the 'Yes' button, then my DC++ client correctly "opens" the clicked magnet link and performs some action on it. There is also a "Do not show this dialog again" option in Opera's dialog, but it doesn't seem to work correctly. If I check that option before answering 'Yes' and then click on another magnet link of the same kind, Opera will again ask me how to open the new link. I haven't found a protocol association in the 'Control Panel > Default Programs > Set Associations' part of my Windows Vista settings, but if I paste a magnet link into the "Run" dialog, Vista handles the link perfectly. I've tried to find out how to manually set a protocol association in Opera and found the 'Programs' page of the browser's advanced settings. There I discovered that instead of storing protocol-to-application associations, Opera tries to store per-link associations (there are several entries whose protocol field contains the exact links as they appeared on the web page). If I click on links that are already stored in Opera's protocol associations, the browser will ask me about them again. I haven't found any information on how to resolve this problem on the internet; maybe someone on this site will be able to help me.

    Read the article

  • Linux security: The dangers of executing malignant code as a standard user

    - by AndreasT
    Slipping some (non-root) user a piece of malignant code that he or she executes might be considered one of the worst security breaches possible. (The only worse one I can see is actually gaining access to the root user.) What can an attacker effectively do when he/she gets a standard user (let's say a normal Ubuntu user) to execute code? Where would an attacker go from there? What would that piece of code do? Let's say that the user is not stupid enough to be lured into entering the root/sudo password into a form/program she doesn't know. Only software from trusted sources is installed. The way I see it there is not really much one could do, is there? Addition: I partially ask this because I am thinking of granting some people shell (non-root) access to my server. They should be able to have normal access to programs. I want them to be able to compile programs with gcc. So there will definitely be arbitrary code running in user-space...

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration.
    4x web servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)
    3x Varnish servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)
    The Varnish servers' performance is awfully bad in general, to the point that if we shut down one of them, the other two are unable to fulfill all the requests and start to skip beats, resulting in pending requests, timeouts, 404s, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k requests per second during our peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output:

    acceptor_sleep_decay 0.900000 []
    acceptor_sleep_incr 0.001000 [s]
    acceptor_sleep_max 0.050000 [s]
    auto_restart on [bool]
    ban_dups on [bool]
    ban_lurker_sleep 0.010000 [s]
    between_bytes_timeout 60.000000 [s]
    cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared -Wl,-x -o %o %s"
    cli_buffer 8192 [bytes]
    cli_timeout 20 [seconds]
    clock_skew 10 [s]
    connect_timeout 0.700000 [s]
    critbit_cooloff 180.000000 [s]
    default_grace 10.000000 [seconds]
    default_keep 0.000000 [seconds]
    default_ttl 120.000000 [seconds]
    diag_bitmap 0x0 [bitmap]
    esi_syntax 0 [bitmap]
    expiry_sleep 1.000000 [seconds]
    fetch_chunksize 128 [kilobytes]
    fetch_maxchunksize 262144 [kilobytes]
    first_byte_timeout 60.000000 [s]
    group varnish (113)
    gzip_level 6 []
    gzip_memlevel 8 []
    gzip_stack_buffer 32768 [Bytes]
    gzip_tmp_space 0 []
    gzip_window 15 []
    http_gzip_support off [bool]
    http_max_hdr 64 [header lines]
    http_range_support on [bool]
    http_req_hdr_len 8192 [bytes]
    http_req_size 32768 [bytes]
    http_resp_hdr_len 8192 [bytes]
    http_resp_size 32768 [bytes]
    idle_send_timeout 60 [seconds]
    listen_address :80
    listen_depth 1024 [connections]
    log_hashstring on [bool]
    log_local_address off [bool]
    lru_interval 2 [seconds]
    max_esi_depth 5 [levels]
    max_restarts 4 [restarts]
    nuke_limit 50 [allocations]
    pcre_match_limit 10000 []
    pcre_match_limit_recursion 10000 []
    ping_interval 3 [seconds]
    pipe_timeout 60 [seconds]
    prefer_ipv6 off [bool]
    queue_max 100 [%]
    rush_exponent 3 [requests per request]
    saintmode_threshold 10 [objects]
    send_timeout 600 [seconds]
    sess_timeout 5 [seconds]
    sess_workspace 16384 [bytes]
    session_linger 50 [ms]
    session_max 100000 [sessions]
    shm_reclen 255 [bytes]
    shm_workspace 8192 [bytes]
    shortlived 10.000000 [s]
    syslog_cli_traffic on [bool]
    thread_pool_add_delay 2 [milliseconds]
    thread_pool_add_threshold 2 [requests]
    thread_pool_fail_delay 200 [milliseconds]
    thread_pool_max 2000 [threads]
    thread_pool_min 5 [threads]
    thread_pool_purge_delay 1000 [milliseconds]
    thread_pool_stack unlimited [bytes]
    thread_pool_timeout 300 [seconds]
    thread_pool_workspace 65536 [bytes]
    thread_pools 2 [pools]
    thread_stats_rate 10 [requests]
    user varnish (106)
    vcc_err_unref on [bool]
    vcl_dir /etc/varnish
    vcl_trace off [bool]
    vmod_dir /usr/lib/varnish/vmods
    waiter default (epoll, poll)

    This is our default.vcl file:

    sub vcl_recv {
        # BASIC recv COMMANDS:
        #
        #   lookup -> search the item in the cache
        #   pass   -> always serve a fresh item (no-caching)
        #   pipe   -> like pass but ensures a direct connection with the backend (no-cache AND no-proxy)

        # Allow the backend to serve up stale content if it is responding slowly.
        # This defines when Varnish should use a stale object if it has one in the cache.
        set req.grace = 30s;

        if (client.ip == "127.0.0.1") {
            # request from NGINX - do not alter X-Forwarded-For
            set req.http.HTTPS = "on";
        } else {
            # Add an X-Forwarded-For to keep track of the original request
            unset req.http.HTTPS;
            unset req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;
        }

        set req.backend = www_director;

        # Strip all cookies to force an anonymous request when the back-end servers are down.
        if (!req.backend.healthy) {
            unset req.http.Cookie;
        }

        ## HTTP Accept-Encoding
        if (req.http.Accept-Encoding) {
            if (req.http.Accept-Encoding ~ "gzip") {
                set req.http.Accept-Encoding = "gzip";
            } else if (req.http.Accept-Encoding ~ "deflate") {
                set req.http.Accept-Encoding = "deflate";
            } else {
                unset req.http.Accept-Encoding;
            }
        }

        if (req.request != "GET" && req.request != "HEAD" &&
            req.request != "PUT" && req.request != "POST" &&
            req.request != "TRACE" && req.request != "OPTIONS" &&
            req.request != "DELETE") {
            /* non-RFC2616 or CONNECT */
            return (pipe);
        }
        if (req.request != "GET" && req.request != "HEAD") {
            /* only deal with GET and HEAD by default */
            return (pass);
        }
        if (req.http.Authorization) {
            return (pass);
        }
        if (req.http.HTTPS ~ "on") {
            return (pass);
        }

        ######################################################
        # COOKIE HANDLING
        ######################################################
        # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC
        if (!(req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) {
            if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                return (pass);
            }
        }

        return (lookup);
    }

    # Code determining what to do when serving items from the IIS server
    sub vcl_fetch {
        unset beresp.http.Server;
        set beresp.http.Server = "Server-1";

        # Allow items to be stale if needed. This is the maximum time Varnish should keep an object.
        set beresp.grace = 1h;

        if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") {
            unset beresp.http.set-cookie;
        }

        # Default Varnish VCL logic
        if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }

        # Not cacheable if it has the specific TB_NC no-caching cookie
        if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
            set beresp.http.X-Cacheable = "NO:Got Cookie";
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }
        # Not cacheable if it has Cache-Control private
        else if (beresp.http.Cache-Control ~ "private") {
            set beresp.http.X-Cacheable = "NO:Cache-Control=private";
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }
        # Not cacheable if it has Cache-Control no-cache or Pragma no-cache
        else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") {
            set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)";
            set beresp.ttl = 120s;
            return (hit_for_pass);
        }
        # If we reach this point, the object is cacheable.
        # Cacheable but without enough TTL: we need to extend the lifetime of the object artificially.
        # NOTE: the Varnish default TTL is set in /etc/sysconfig/varnish
        # and can be checked using the following command:
        # varnishadm param.show default_ttl
        else if (beresp.ttl < 1s) {
            set beresp.ttl = 5s;
            set beresp.grace = 5s;
            set beresp.http.X-Cacheable = "YES:FORCED";
        }
        # Cacheable and with a valid TTL.
        else {
            set beresp.http.X-Cacheable = "YES";
        }

        # DEBUG INFO (Cookies)
        # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie;

        return (deliver);
    }

    sub vcl_error {
        set obj.http.Content-Type = "text/html; charset=utf-8";
        if (obj.status == 404) {
            synthetic {" <!-- Markup for the 404 page goes here --> "};
        } else if (obj.status == 500) {
            synthetic {" <!-- Markup for the 500 page goes here --> "};
        } else if (obj.status == 503) {
            if (req.restarts < 4) {
                return (restart);
            } else {
                synthetic {" <!-- Markup for the 503 page goes here --> "};
            }
        } else {
            synthetic {" <!-- Markup for a generic error page goes here --> "};
        }
    }

    sub vcl_deliver {
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }

    Thanks in advance,

    Read the article

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a white-list of executables for a certain group of users? I need them to be unable to use, for example, make, gcc, and executables on removable disks. How can this be done? Edit, let me explain better: I'm dealing with a high school IT system, young geeks that (during the lessons) want to play, surf the net, and damage those computers however they can. The major step towards achieving this goal was to remove the system they're familiar with and install Ubuntu on all the computers. This actually works quite well, but recent events proved that this is not enough. I want to allow them to execute certain safe programs, like Open Office, and to deny any other program, whether it is preinstalled software, something they carry in on USB drives, a downloaded program, or a script they write on site. It's possible to remove the 'x' permission on any file on the PC, but of course it would be impractical. Furthermore, they would be able to run anything they download. I thought the best solution would be to make a white-list of safe programs and to deny anything else, but I don't really know how to do it. Any idea is helpful.

    Read the article

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a simple workbook that is small (< 200 rows that don't even reach column AA) and does nothing more than calculate some totals within the same worksheets. No macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings. I see that importing data from CSV/text has created many, many workbook connections over time, but even if I delete them all (there were hundreds) it makes no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, (Not Responding) appended to the title bar, and the application locking up. The program seems to "recover" every time, but the efficiency of editing this file is obviously seriously handicapped. All other files seem fine in Excel, and other programs have no apparent performance issues. I see Excel is chewing up CPU, but I'm not sure how to narrow down what process or service is "clashing" with Excel. I tried the same file on other computers and performance is fine. If I turn off all start-up services and run only Excel, performance is restored... until I start using other programs, and then it bogs down again. At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.

    Read the article

  • Have a startup .cmd wait for the system-wide startup script to finish

    - by Dan
    On a computer I'm not an administrator on, there is a startup file (a .cmd file in C:\Documents and Settings\All Users\Start Menu\Programs\Startup) for all users. It does several things I don't like, so I have created my own .cmd file, which I have placed in C:\Documents and Settings\[user]\Start Menu\Programs\Startup. I want my code to run after the system-wide file, because my program first undoes the network mappings the system-wide file did and then replaces them with my own network mappings. How can I make my program wait and start as soon as the system-wide one is done? (I cannot remove or edit the system-wide file.) Thanks EDIT: The time that the system-wide file takes to run varies, and I would like my file to run right after. The "If Exists" method seems a little too contingent, since the system-wide script can change or a file could be moved. I am hoping to give my script to a few of my coworkers, so I'd like it to work without me having to update anything. Also, being a Linux guy, I don't know cmd code, so please write out any coding suggestions.
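    An illustration of one way to sequence this: poll for a side effect of the system-wide script, then undo and replace its mappings. The sketch below is Python purely for readability (the same polling loop can be translated into a .cmd file); the sentinel drive letter, the share path, and the availability of Python on the machine are all assumptions made for the example. A drive mapping is a harder-to-miss sentinel than an "If Exists" file check, though it is still contingent on knowing one thing the system-wide script does.

    import subprocess
    import time

    SENTINEL_DRIVE = "H:"  # hypothetical: a mapping the system-wide script creates
    MY_MAPPINGS = [("X:", r"\\server\myshare")]  # hypothetical replacement mappings

    def drive_mapped(letter):
        # "net use" lists the current mappings; look for the sentinel letter.
        out = subprocess.run(["net", "use"], capture_output=True, text=True)
        return letter.upper() in out.stdout.upper()

    def main():
        # Poll (up to 5 minutes) until the system-wide script has done its work.
        for _ in range(300):
            if drive_mapped(SENTINEL_DRIVE):
                break
            time.sleep(1)
        # Undo the unwanted mapping, then add our own.
        subprocess.run(["net", "use", SENTINEL_DRIVE, "/delete", "/y"])
        for letter, share in MY_MAPPINGS:
            subprocess.run(["net", "use", letter, share])

    if __name__ == "__main__":
        main()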

    Read the article

  • WINDOWS: Your computer hangs. You can use Windows+R (the Run dialog), but performance is so degraded that Task Manager is unusable

    - by John Sullivan
    The question is, what processes are available to try to recover from total system instability, before pulling the plug, when we can do nothing but run programs or batches on the path from the Run dialog (Windows+R key), and performance is so dead that Task Manager / Process Explorer / other programs with visual GUIs are not usable? I am not a Windows expert, but ideally someone out there has written a program that does more or less stuff like this: immediately set (or perhaps I can set from the run prompt) its priority to extremely high, then evaluate performance bottlenecks. E.g. is CPU at 100%? If so, identify the offending program(s) or problems. Attempt / log fixes, then provide crude feedback asking the user if his performance has stabilized enough to abort, wait a few seconds, if no feedback continue, etc. etc. Eventually try to do any "system cleanup" if the program decides it cannot recover, and perhaps finally provide a series of beeps to the user, or what have you, to say "OK, I give up, time to pull the plug". Ideally create a log, when able. These kinds of horrible hangs are a situation where surely trying something, anything, is better than nothing -- as long as that something is intelligent -- when the alternative is ripping out the power cord. Again, I am not a Windows expert, so perhaps there is a much more elegant "hands on" approach I am not aware of.
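    For illustration, the skeleton of the hypothetical triage tool described above is not hard to sketch. The snippet below is a rough Python outline, assuming the third-party psutil package is installed and that the box can still launch a script from the Run dialog; the log file name, round count, and intervals are invented for the example.

    import psutil  # third-party: pip install psutil

    def triage(logfile="hang_triage.log", rounds=10, interval=2.0):
        me = psutil.Process()
        try:
            # Raise our own priority so the tool can still run on a pegged box.
            me.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows-only priority constant
        except (AttributeError, psutil.AccessDenied):
            pass
        with open(logfile, "a") as log:
            for _ in range(rounds):
                total = psutil.cpu_percent(interval=interval)
                log.write("total CPU: %.1f%%\n" % total)
                # Log the top CPU consumers seen in this round.
                procs = sorted(psutil.process_iter(["name", "cpu_percent"]),
                               key=lambda p: p.info["cpu_percent"] or 0.0,
                               reverse=True)
                for p in procs[:5]:
                    log.write("  %-30s %s%%\n" % (p.info["name"], p.info["cpu_percent"]))
                log.flush()

    if __name__ == "__main__":
        triage()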

    Read the article

  • Best way to create and restore a Drive Image with Windows 7? [closed]

    - by jasondavis
    Possible Duplicate: Want to create a system image I am about to build a new PC. I am a Windows 7 user. For years now I have been wanting to install Windows and all my favorite software, music, etc., then make a drive IMAGE, and be able to go in 6 months later or WHENEVER I want to start fresh, completely format my drives, restore my IMAGE, and have all my settings, programs, etc. be just like when I created the original image. I know there are many ways to do this, but I have never done it 100% successfully, and I have about a week to figure out how to do it perfectly for when I build my new PC. I have heard good things about Acronis True Image in the PAST for doing what I describe. I tried using it, but the newer versions are overly complex and don't even seem to work the way I hoped. I also see that Windows 7 has some sort of drive IMAGE creator itself as well. Does the newer Windows 7 image creator do what I am describing above? Does it do what I am asking for (a complete drive image with Windows, all programs and settings) saved to an IMAGE file that can easily be restored to ANY hard drive in the future? Please share your experiences, tips, and ideas on how to achieve this the easiest and most reliable way.

    Read the article

  • Microphone doesn't work

    - by mandy
    I'm having trouble with my built-in microphone. Even if I use a headphone with a mic, it doesn't really work. The weird thing is, if I clap, the green lines of the speaker icon jump, but if I speak they don't. I have also tried some recordings, but I cannot hear myself, and adjusting the volume didn't help at all. I tried to restore it, still no change. I have updated it in the device manager, but it said there that it's up to date and the devices are working properly. Then I decided to recover the whole system (by pressing zero and the power button); to my surprise, the settings became different, and most programs were deleted, even my files. It's like it was formatted, and I'm so sad that the mic was not fixed. I really don't know what to do now. My laptop model is Toshiba Satellite M840. I want to return it to its settings/setup from just before I recovered the system, bring back all the programs that were installed and, of course, most of all, fix my microphone so I can use Skype again and other video calling applications. I hope someone could help me. Thanks a lot!

    Read the article

  • Can I install windows on an SSD and access data from my old windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer, like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that last time I upgraded with the same Windows version). When the computer attempts to boot into Windows, it briefly flashes a bluescreen and then restarts. System recovery gives a message something like "BadDriver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?) and I've been as-yet unsuccessful in getting it to boot into my old Windows partition. SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows on that, and then... be able to access just the contents of both of my other hard drives? I don't intend on running any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it, no OS or applications, so I'd also expect to still be able to access it. Is this possible? Can I install the SSD, format/install Windows onto only it, and still access the contents of my two transferred-over drives?

    Read the article

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel. I would like to merge these fifos. The naïve solution is: cat fifo* > output But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is: (cat fifo1 & cat fifo2 & ... ) > output But this may mix the output, thus getting half-lines in the output. When reading from multiple fifos, there must be some rules for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does: parallel_non_blocking_cat fifo* > output which will read from all fifos in parallel and merge the output a full line at a time. I can see it is not hard to write that program. All you need to do is:
    1. open all fifos
    2. do a blocking select on all of them
    3. read nonblocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record) then print out the line
    5. if all fifos are closed/eof: exit
    6. goto 2
    So my question is not: can it be done? My question is: Is it done already, and can I just install a tool that does this?
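    For reference, here is roughly what that recipe looks like written down, as a minimal Python sketch. It assumes Linux, line-oriented records, and that the producers have already opened their fifos for writing (on Linux, a fifo opened with O_RDONLY|O_NONBLOCK and no writer reads as end-of-file, so start the merger after the producers).

    import os
    import select
    import sys

    def parallel_cat(paths):
        # O_NONBLOCK so open() does not block waiting for a writer.
        fds = {}
        for p in paths:
            fd = os.open(p, os.O_RDONLY | os.O_NONBLOCK)
            fds[fd] = b""  # per-fifo buffer of incomplete line data

        while fds:
            ready, _, _ = select.select(list(fds), [], [])  # blocking select
            for fd in ready:
                chunk = os.read(fd, 65536)
                if not chunk:  # EOF: the writer closed the fifo
                    if fds[fd]:  # flush a trailing partial line
                        sys.stdout.buffer.write(fds[fd] + b"\n")
                    os.close(fd)
                    del fds[fd]
                    continue
                fds[fd] += chunk
                # Emit only complete lines; keep the remainder buffered.
                lines, sep, rest = fds[fd].rpartition(b"\n")
                if sep:
                    sys.stdout.buffer.write(lines + b"\n")
                    sys.stdout.buffer.flush()
                fds[fd] = rest

    if __name__ == "__main__":
        parallel_cat(sys.argv[1:])

    Run as python3 parallel_cat.py fifo* > output, it emits only complete lines, keeping each fifo's partial line buffered until its newline arrives.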

    Read the article

  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features. The installation works fine. Now, I would like to publish an updated version of mercurial. I create a new Published package. Under 'Upgrades' I configure it to replace (upgrade also doesn't work) the old version and mark this upgrade as 'Required'. The old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message saying something like 'Removing managed software Mercurial ...'. There is no message about installation of the upgrade. If I look in Programs and Features, I can see the new version of mercurial listed. However, the actual mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI, but didn't actually install anything after removing the old version. As I mentioned, this isn't specific to mercurial. I've tried using other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages in Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?

    Read the article

  • How to prevent the command prompt from closing after execution?

    - by Sk8erPeter
    My problem is that in Windows, there are command line windows that close immediately after execution. To solve this, I want the default behavior to be that the window is kept open. Normally, this closing behavior can be avoided with three methods that come to my mind:
    1. Putting a pause line after batch programs to prompt the user to press a key before exiting
    2. Running these batch files or other command-line manipulating tools (even service starting, restarting, etc. with net start xy or anything similar) within cmd.exe (Start - Run - cmd.exe)
    3. Running these programs with cmd /k, like this: cmd /k myprogram.bat
    But there are some other cases in which the user:
    - Runs the program for the first time and doesn't know that the given program will run in the Command Prompt (Windows Command Processor), e.g. when running a shortcut from the Start menu (or from somewhere else), OR
    - Finds it a little bit uncomfortable to run cmd.exe all the time and doesn't have the time/opportunity to rewrite the code of these commands everywhere to put a pause after them or avoid exiting explicitly.
    I've read an article about changing the default behavior of cmd.exe when opening it explicitly, by creating an AutoRun entry and manipulating its content in these locations:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor\AutoRun
    HKEY_CURRENT_USER\SOFTWARE\Microsoft\Command Processor\AutoRun
    (The AutoRun items are _String values_...) I put cmd /d /k as its value to give it a try, but this didn't change the behaviour of the cases mentioned above at all... It just changed the behaviour of the command line window when opening it explicitly (Start - Run - cmd.exe). So how does it work? Can you give me any ideas to solve this problem?
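    As an aside, the AutoRun tweak described above can be scripted rather than edited by hand. A minimal sketch using Python's standard winreg module (assuming Python is available; the value written is just the cmd /d /k string tried above):

    import winreg

    def set_cmd_autorun(command="cmd /d /k"):
        # Per-user AutoRun value; cmd.exe executes this each time it starts.
        # The /d on the inner cmd stops it from processing AutoRun again.
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                               r"Software\Microsoft\Command Processor")
        winreg.SetValueEx(key, "AutoRun", 0, winreg.REG_SZ, command)
        winreg.CloseKey(key)

    if __name__ == "__main__":
        set_cmd_autorun()

    Note that, as observed above, AutoRun only fires when cmd.exe itself starts; programs launched directly from a shortcut never pass through cmd.exe, which is exactly why this tweak does not change their behavior.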

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this, read up on some things that involved booting into Audit Mode, using the RoboCopy command to copy folders via booting into my Windows 8 USB drive, creating NTFS junctions/symbolic links, and Registry edits, as well as accomplishing this automatically by creating an unattend (answer) file which Windows Setup processes automatically before the user is ever booted in for the 1st time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found/opened (even if I click on them directly); an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account, or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found the following blog article, which describes how to do this for Windows 7. So, where should I go from here, and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files / C:\Program Files (x86) or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
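    For what it's worth, the RoboCopy-plus-junction step described above can be written down compactly. The sketch below uses Python only for readability (in practice you would run the equivalent robocopy/rmdir/mklink commands directly from a setup or WinPE console while C:\ is offline); the D: target and the folder pairs are illustrative assumptions, and this is a sketch of the procedure, not a tested recipe.

    import subprocess

    # Hypothetical source/target pairs; adjust to your partition layout.
    PAIRS = [(r"C:\Program Files", r"D:\Program Files"),
             (r"C:\Program Files (x86)", r"D:\Program Files (x86)")]

    for src, dst in PAIRS:
        # /E: include subdirectories, /COPYALL: keep ACLs/owners, /XJ: skip junctions.
        # robocopy exit codes below 8 indicate success.
        subprocess.run(["robocopy", src, dst, "/E", "/COPYALL", "/XJ"])
        # Remove the original tree, then recreate the old path as a junction
        # so existing references keep resolving. rmdir/mklink are cmd builtins.
        subprocess.run(["cmd", "/c", "rmdir", "/s", "/q", src])
        subprocess.run(["cmd", "/c", "mklink", "/J", src, dst])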

    Read the article

  • With Ubuntu 12.04, unlike 11.04, Wine-installed applications' start menu links are missing

    - by Ron Whites
    With Ubuntu 12.04 and Wine 1.4, unlike Ubuntu 11.04 with Wine 1.2.2, installed applications' start menu links are missing. For instance, with Ubuntu 11.04 including Wine I can install one of our Windows applications and then go to Applications > Wine > Programs > Semantic Designs > TestCoverage > Documentation to bring up the documentation for how to run our tool. Unfortunately, with Ubuntu 12.04 the Applications menu is gone, and going to the Dash I do see "Recent Apps and more apps", but my installed Wine application and its related documentation link are not shown, even though the Wine uninstaller shows them as present. I found this online suggestion and tried using the GNOME "Main Menu":
    1. Windows key to launch the Dash.
    2. Enter "Main Menu" in the search field and open the old Edit Main Menu app.
    3. Select the Category (aka Unity Dash Filter) you want the item in.
    4. Name the Dash/Launcher item.
    5. Add the Command to launch said app.
    With Main Menu I then got down to the TestCoverage Documentation, and in its properties I could see a command link of:
    env WINEPREFIX="/home/sdtest/.wine" wine C:\windows\command\start.exe /Unix /home/sdtest/.wine/dosdevices/c:/users/sdtest/Start\ Menu/Programs/Semantic\ Designs/Test\ Coverage/Java\ 1.7\ Documentation.lnk
    BUT I could not execute this link to view the installed documentation. So I copied the link properties into a file, set it as executable, and ran it as a bash script, and the documentation came up! So why can't I use this link under Main Menu?

    Read the article

  • Load balance to proxies

    - by LoveRight
    I have installed several proxy programs whose addresses are, for example, 127.0.0.1:8580 (using HTTP) and 127.0.0.1:9050 (using SOCKS5). You may regard them as Tor and its alternatives. You know, certain proxy programs are faster than others at times, while at other times they would be slower. The Firefox add-ons AutoProxy and FoxyProxy Standard can define a list of rules, such as: any URLs matching the pattern *.google.com should be proxied to 127.0.0.1:8580 using the SOCKS5 protocol. But the rule is "static". I want *.google.com to be proxied through the fastest proxy, no matter which one. I think that is a kind of load balancing. I thought I could set a rule that directs requests for *.google.com to the address the load balancer listens on, and the load balancer forwards the request to the fastest real proxy. I notice that Tor uses the SOCKS5 protocol and some other applications use HTTP. I am confused about which protocol the load balancer should use. I also start to wonder about the feasibility of this solution. Any suggestions? My operating system is Windows 7 x64.
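    As a starting point, the "fastest proxy" selection can be approximated by periodically timing a small fetch through each candidate and routing traffic to the winner. Below is a rough Python sketch that probes HTTP proxies only (timing the SOCKS5 one would need a third-party library such as PySocks); the probe URL and the second proxy address are made-up placeholders.

    import time
    import urllib.request

    PROXIES = ["127.0.0.1:8580", "127.0.0.1:8118"]  # second entry is hypothetical
    TEST_URL = "http://www.google.com/"             # arbitrary probe target

    def probe(proxy, timeout=5.0):
        # Time a small request routed through the given HTTP proxy.
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": "http://" + proxy}))
        start = time.monotonic()
        try:
            opener.open(TEST_URL, timeout=timeout).read(1024)
        except OSError:
            return float("inf")  # unreachable proxies lose the race
        return time.monotonic() - start

    def fastest(proxies=PROXIES):
        return min(proxies, key=probe)

    if __name__ == "__main__":
        print("fastest proxy right now:", fastest())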

    Read the article

  • Adding SSD as boot drive to existing system

    - by thegrinner
    I recently bought two 128GB SSDs that I'm planning on adding (RAID 0) to a system I currently have on a 1TB HDD. I'm hoping to redo the disk space such that the SSDs act as the boot drive (only other items would be things I install there explicitly) while the majority of my system is on the HDD - documents, media, program files. Something like this: SSD = [ OS | Explicitly placed programs] HDD = [ Program Files | Media | Documents | etc] I have an external drive capable of holding all the data I want to save, so the backup isn't too much of a concern. What I'm worried about is how I should go about doing this - do I need to do a clean install on the SSDs, reformat the HDD, move things like Program Files/Users to the HDD, and then restore data (not full programs but things like saves)? Should I be using one of the regedit hacks I've seen around to change the default install directories instead of moving program files and users? Should I have the actual folders on the HDD and symlinks on the SSD? Or is there a better solution? Do I need to disconnect my HDD while doing the clean Windows install?

    Read the article

  • Windows scroll without focus

    - by DanielCardin
    So I have a Windows 8 laptop at home and a Windows 7 laptop at work. Both have Synaptics touchpads. The problem is that on the work laptop, I can scroll any window regardless of which one is currently focused. That is the behavior that I want on both computers. This does not currently happen on the Windows 8 computer. I know I can use (and have tried!) WizMouse, AlwaysMouseWheel, KatMouse, etc., but none of them work 100% like the work computer. KatMouse sometimes stops working; with AlwaysMouseWheel I've had problems with it scrolling on its own; WizMouse sometimes makes the mouse lag. Others have just not worked. Before I got the work computer, I had resigned myself to it, but now I see that it works, out of the box without using any external programs, on an older operating system, and I wonder why I can't get it to work the same way on my own computer! All my searches have just turned up people suggesting the external programs that I've already tried, so answers suggesting those aren't really what I'm looking for (unless it's some magic I can do with the Synaptics driver, which, by the way, is more up to date on the Windows 8 computer where it doesn't work).

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we'll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView:
    1. Create a data source
    2. Create an Item Template
    3. Declare the ListView

    Creating the ListView Data Source
    The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn't matter where the JavaScript array comes from. It could be a static array that you declare, or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView.

    (function () {
        "use strict";

        var products = new WinJS.Binding.List([
            { name: "Milk", price: 2.44 },
            { name: "Oranges", price: 1.99 },
            { name: "Wine", price: 8.55 },
            { name: "Apples", price: 2.44 },
            { name: "Steak", price: 1.99 },
            { name: "Eggs", price: 2.44 },
            { name: "Mushrooms", price: 1.99 },
            { name: "Yogurt", price: 2.44 },
            { name: "Soup", price: 1.99 },
            { name: "Cereal", price: 2.44 },
            { name: "Pepsi", price: 1.99 }
        ]);

        WinJS.Namespace.define("ListViewDemos", {
            products: products
        });
    })();

    The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx

    Creating the ListView Item Template
    The ListView control does not know how to render anything. It doesn't know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here's what our template for rendering an individual product looks like:

    <div id="productTemplate" data-win-control="WinJS.Binding.Template">
        <div class="product">
            <span data-win-bind="innerText:name"></span>
            <span data-win-bind="innerText:price"></span>
        </div>
    </div>

    This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file. To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx

    Declaring the ListView
    The final step is to declare the ListView control in a page. Here's the markup for declaring a ListView:

    <div data-win-control="WinJS.UI.ListView"
         data-win-options="{
            itemDataSource: ListViewDemos.products.dataSource,
            itemTemplate: select('#productTemplate')
         }">
    </div>

    You declare a ListView by adding the data-win-control attribute to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template — #productTemplate – is used to select the template from the page. Here's what the complete version of the default.html page looks like:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>ListViewDemos</title>

        <!-- WinJS references -->
        <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
        <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
        <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

        <!-- ListViewDemos references -->
        <link href="/css/default.css" rel="stylesheet">
        <script src="/js/default.js"></script>
        <script src="/js/products.js" type="text/javascript"></script>

        <style type="text/css">
            .product {
                width: 200px;
                height: 100px;
                border: white solid 1px;
            }
        </style>
    </head>
    <body>
        <div id="productTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
        </div>

        <div data-win-control="WinJS.UI.ListView"
             data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                itemTemplate: select('#productTemplate')
             }">
        </div>
    </body>
    </html>

    Notice that the page above includes a reference to the products.js file: <script src="/js/products.js" type="text/javascript"></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control.

    Summary
    The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.

    Read the article

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I'll be covering logging for the Quartz.Net Windows Service:
    01 – Why doesn't Quartz.Net Windows Service log by default
    02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc
    03 – Results: Logging in action
    If you are new to Quartz.Net I would recommend going through:
    - A brief Introduction to Quartz.net
    - Walkthrough of Installing & Testing Quartz.Net as a Windows Service
    - Writing & Scheduling your First HelloWorld job with Quartz.Net

    01 – Why doesn't Quartz.Net Windows Service log by default
    If you are trying to figure out why…
    - The Quartz.Net windows service isn't logging
    - The Quartz.Net windows service isn't writing anything to the event log
    - The Quartz.Net windows service isn't writing anything to a file
    - How do I configure Quartz.Net windows service to use log4net
    - How do I change the level of logging for Quartz.Net
    Look no further, this blog post should help you answer these questions. Quartz.NET uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where the Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net; you can find out the location by looking at the properties of the service) and open 'Quartz.Server.exe.config', you'll see that Quartz.Net is already set up for logging to the ConsoleAppender and EventLogAppender, but only 'ConsoleAppender' is set up as active. So, unless you have a console associated with the Quartz.Net service, you won't be able to see any logging.

    <log4net>
      <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%d [%t] %-5p %l - %m%n" />
        </layout>
      </appender>
      <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%d [%t] %-5p %l - %m%n" />
        </layout>
      </appender>
      <root>
        <level value="INFO" />
        <appender-ref ref="ConsoleAppender" />
        <!-- uncomment to enable event log appending -->
        <!-- <appender-ref ref="EventLogAppender" /> -->
      </root>
    </log4net>

    Problem: In the configuration above, the Quartz.Net Windows Service only has the ConsoleAppender active, so no logging will be done to the EventLog. Moreover, the RollingFileAppender isn't set up at all, so Quartz.Net will not log to an application trace log file.

    02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc
    Let's change this behaviour by changing the config file… In the config file below:
    - I have added the RollingFileAppender. This will configure the Quartz.Net service to write to a log file. (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">)
    - I have specified the location for the log file. (<arg key="configFile" value="Trace/application.log.txt"/>)
    - I have enabled the EventLogAppender and RollingFileAppender to be written to by the Quartz.Net windows service.
    - I have changed the default level of logging from 'Info' to 'All'. This means all activity performed by the Quartz.Net Windows service will be logged. You might want to tune this back to 'Debug' or 'Info' later, as logging 'All' will produce too much data in the logs. (<level value="ALL"/>)
    - Since I have changed the logging level to 'All', I have added an application setting to turn off log4net's internal debug logging. (<add key="log4net.Internal.Debug" value="false"/>)

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <configSections>
        <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
        <sectionGroup name="common">
          <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
        </sectionGroup>
      </configSections>
      <common>
        <logging>
          <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net">
            <arg key="configType" value="INLINE" />
            <arg key="configFile" value="Trace/application.log.txt"/>
            <arg key="level" value="ALL" />
          </factoryAdapter>
        </logging>
      </common>
      <appSettings>
        <add key="log4net.Internal.Debug" value="false"/>
      </appSettings>
      <log4net>
        <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%d [%t] %-5p %l - %m%n" />
          </layout>
        </appender>
        <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%d [%t] %-5p %l - %m%n" />
          </layout>
        </appender>
        <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">
          <file value="Trace/application.log.txt"/>
          <appendToFile value="true"/>
          <maximumFileSize value="1024KB"/>
          <rollingStyle value="Size"/>
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n"/>
          </layout>
        </appender>
        <root>
          <level value="ALL" />
          <appender-ref ref="ConsoleAppender" />
          <appender-ref ref="EventLogAppender" />
          <appender-ref ref="GeneralLog"/>
        </root>
      </log4net>
    </configuration>

    Note – Please ensure you restart the Quartz.Net Windows service for the config changes to be picked up by the service.

    03 – Results: Logging in action
    Once you start the Quartz.Net Windows Service up, logging should begin writing all activity to the Console, EventLog and file… See the screenshots below.
    Figure – Quartz.Net Windows Service logging all activity to the event log
    Figure – Quartz.Net Windows Service logging all activity to the application log file
    Where is the output from the log4net ConsoleAppender? As a default behaviour, the console isn't available in windows services, web services, or windows forms; the output will simply be dismissed, unless you are running the process interactively, which you can do by firing up Quartz.Server.exe -i to see the output.

    This was the fourth in the series of posts on enterprise scheduling using Quartz.net; in the next post I'll be covering troubleshooting why a scheduled task hasn't fired on the Quartz.net windows service. All Quartz.Net specific blog posts are listed here. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Validation in Silverlight

    - by Timmy Kokke
    Getting started with the basics
    Validation in Silverlight can get very complex pretty easily. The DataGrid control is the only control that does data validation automatically, but often you want to validate your own entry form. Values a user may enter in this form can be restricted by the customer and have to fit a list of requirements exactly, or you just want to prevent problems when saving the data to the database. Showing a message to the user when a value is entered is pretty straightforward, as I'll show you in the following example.

    This (default) Silverlight textbox is data-bound to a simple data class. It has to be bound in "Two-way" mode to be sure the source value is updated when the target value changes. The INotifyPropertyChanged interface must be implemented by the data class to get the notification system to work. When the property changes, a simple check is performed, and when it doesn't match some criteria a ValidationException is thrown. The ValidatesOnExceptions binding attribute is set to True to tell the textbox it should handle the thrown ValidationException. Let's have a look at some code now. The XAML should contain something like the markup below. The most important part is inside the binding. In this case the Text property is bound to the "Name" property in TwoWay mode. It is also told to validate on exceptions. This property is false by default.

    <StackPanel Orientation="Horizontal">
        <TextBox Width="150" x:Name="Name"
                 Text="{Binding Path=Name, Mode=TwoWay, ValidatesOnExceptions=True}"/>
        <TextBlock Text="Name"/>
    </StackPanel>

    The data class in this first example is a very simplified person class with only one property: string Name. The INotifyPropertyChanged interface is implemented, and the PropertyChanged event is fired when the Name property changes. When the property changes, a check is performed to see if the new string is null or empty. If this is the case, a ValidationException is thrown explaining that the entered value is invalid.

    public class PersonData : INotifyPropertyChanged
    {
        private string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (_name != value)
                {
                    if (string.IsNullOrEmpty(value))
                        throw new ValidationException("Name is required");
                    _name = value;
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("Name"));
                }
            }
        }
        public event PropertyChangedEventHandler PropertyChanged = delegate { };
    }

    The last thing that has to be done is binding an instance of the PersonData class to the DataContext of the control. This is done in the code-behind file.

    public partial class Demo1 : UserControl
    {
        public Demo1()
        {
            InitializeComponent();
            this.DataContext = new PersonData() { Name = "Johnny Walker" };
        }
    }

    Error Summary
    In many cases you would have more than one entry control. A summary of errors would be nice in such a case. With a few changes to the XAML, an error summary like the one below can be added.

    First, add a namespace to the XAML so the control can be used. Add the following line to the header of the .xaml file:

    xmlns:Controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"

    Next, add the control to the layout. To get the result as in the image shown earlier, add the control right above the StackPanel from the first example. It's got a small margin to separate it from the textbox a little.

    <Controls:ValidationSummary Margin="8"/>

    The ValidationSummary control has to be notified that a ValidationException occurred. This can be done with a small change to the XAML too. Add NotifyOnValidationError to the binding expression. By default this value is set to false, so nothing would be notified. Set the property to true to get it to work.

    <TextBox Width="150" x:Name="Name"
             Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True}"/>

    Data annotation
    Validating data in the setter is one option, but not my personal favorite. It's the easiest way if you have a single required value you want to check, but often you want to validate more. Besides, I don't consider it best practice to write logic in setters. The way used by frameworks like WCF RIA Services is the use of attributes on the properties. Instead of throwing exceptions, you have to call the static method ValidateProperty on the Validator class. This call always stays the same for a particular property, even when you change the attributes on the property. To mark a property "Required" you can use the RequiredAttribute. This is what the Name property is going to look like:

    [Required]
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                Validator.ValidateProperty(value,
                    new ValidationContext(this, null, null) { MemberName = "Name" });
                _name = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }

    The ValidateProperty method takes the new value for the property and an instance of ValidationContext. The properties passed to the constructor of the ValidationContext class are very straightforward. This part is the same every time. The only thing that changes is the MemberName property of the ValidationContext. MemberName has to hold the name of the property you want to validate. It's the same value you provide the PropertyChangedEventArgs with. The System.ComponentModel.DataAnnotations namespace contains eight different validation attributes, including a base class to create your own. They are:
    - RequiredAttribute: Specifies that a value must be provided.
    - RangeAttribute: The provided value must fall in the specified range.
    - RegularExpressionAttribute: Validates if the value matches the regular expression.
    - StringLengthAttribute: Checks if the number of characters in a string falls between a minimum and maximum amount.
    - CustomValidationAttribute: Use a custom method to validate the value.
    - DataTypeAttribute: Specify a data type using an enum or a custom data type.
    - EnumDataTypeAttribute: Makes sure the value is found in an enum.
    - ValidationAttribute: A base class for custom validation attributes.
    All of these will ensure that a validation exception is thrown, except the DataTypeAttribute. This attribute is used to provide some additional information about the property. You can use this information in your own code. It's no problem to stack different validation attributes together. For example, when an Age is required and must fall in the range from 0 to 125:

    [Required]
    [Range(0, 125, ErrorMessage = "Value is not a valid age")]
    public int Age {

    Or in one row like this, for a required Name with at least 3 characters and a maximum of 255:

    [Required, StringLength(255, MinimumLength = 3)]
    public string Name {

    Delayed validation
    Having properties marked as required can be very useful. The only downside to the technique described earlier is that you have to change the value in order to get it validated. What if you start out with an empty entry form? All fields are empty and thus won't be validated. With this small trick you can validate at the moment the user clicks the submit button. By default, when a TwoWay-bound control loses focus, the value is updated; when you have added validation as shown earlier, the value is validated. To overcome this, you have to tell the binding to update explicitly by setting the UpdateSourceTrigger binding property to Explicit:

    <TextBox Width="150" x:Name="NameField"
             Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True, UpdateSourceTrigger=Explicit}"/>

    This way, the binding still goes in two directions, but the source is only updated, and thus validated, when you tell it to. In the code-behind you have to call the UpdateSource method on the binding expression, which you can get from the TextBox:

    private void SubmitButtonClick(object sender, RoutedEventArgs e)
    {
        NameField.GetBindingExpression(TextBox.TextProperty).UpdateSource();
    }

    Conclusion
    Data validation is something you'll probably want on almost every entry form. I always thought it was hard to do, but it wasn't. If you can throw an exception, you can do validation. If you want to know anything more in depth about something I talked about in this article, let me know. I might write an entire post about that.

    Read the article

  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything works fine until I try to start MongoDB.

    somekittens@DLserver01:~$ mongo
    MongoDB shell version: 2.2.2
    connecting to: test
    Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
    exception: connect failed

    Googling the error led me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get, have not built from source). Mongo's log shows the following error:

    Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Thu Dec 13 18:36:32
    Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01
    Thu Dec 13 18:36:32 [initandlisten]
    Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Thu Dec 13 18:36:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Thu Dec 13 18:36:32 [initandlisten] ** with --journal, the limit is lower
    Thu Dec 13 18:36:32 [initandlisten]
    Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5
    Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
    Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"
    **************
    Unclean shutdown detected.
    Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
    *************
    Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
    Thu Dec 13 18:36:32 dbexit:
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator...
    Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files...
    Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished
    Thu Dec 13 18:36:32 dbexit: really exiting now

    Running through the recovery instructions led to the following adventure:

    somekittens@DLserver01:/var/log/mongodb$ mongod --repair
    Sun Dec 16 22:42:54
    Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:42:54
    Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:42:54 [initandlisten]
    Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:42:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:42:54 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:42:54 [initandlisten]
    Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:42:54 [initandlisten] options: { repair: true }
    Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296
    *********************************************************************
    ERROR: dbpath (/data/db/) does not exist.
    Create this directory or give existing directory in --dbpath.
    See http://dochub.mongodb.org/core/startingandstoppingmongo
    *********************************************************************
    , terminating
    Sun Dec 16 22:42:54 dbexit:
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:42:54 dbexit: really exiting now

    somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data
    somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db
    somekittens@DLserver01:/var/log/mongodb$ mongod --repair
    Sun Dec 16 22:43:51
    Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:43:51
    Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:43:51 [initandlisten]
    Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:43:51 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:43:51 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:43:51 [initandlisten]
    Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:43:51 [initandlisten] options: { repair: true }
    Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
    Sun Dec 16 22:43:51 dbexit:
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock...
    Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
    Sun Dec 16 22:43:51 dbexit: really exiting now

    somekittens@DLserver01:/var/log/mongodb$ service mongodb stop
    stop: Unknown instance:
    somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair
    Sun Dec 16 22:45:04
    Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Sun Dec 16 22:45:04
    Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01
    Sun Dec 16 22:45:04 [initandlisten]
    Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Sun Dec 16 22:45:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Sun Dec 16 22:45:04 [initandlisten] ** with --journal, the limit is lower
    Sun Dec 16 22:45:04 [initandlisten]
    Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5
    Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
    Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Sun Dec 16 22:45:04 [initandlisten] options: { repair: true }
    Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
    Sun Dec 16 22:45:04 [initandlisten] finished checking dbs
    Sun Dec 16 22:45:04 dbexit:
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator...
    Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files...
    Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished
    Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock...
    Sun Dec 16 22:45:04 dbexit: really exiting now

    None of which changed anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
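    For what it's worth, a plausible fix for this combination of symptoms, assuming the stock Ubuntu package layout (data files in /var/lib/mongodb, daemon running as the mongodb user), is to repair the dbpath the package actually configured rather than the /data/db default, and to restore ownership afterwards, since a repair run as root leaves root-owned files the daemon cannot open:

        sudo service mongodb stop                        # make sure nothing is holding the lock
        sudo rm /var/lib/mongodb/mongod.lock             # clear the stale lock left by the unclean shutdown
        sudo mongod --repair --dbpath /var/lib/mongodb   # repair the real data directory, not the /data/db default
        sudo chown -R mongodb:mongodb /var/lib/mongodb   # undo the root ownership the repair run created
        sudo service mongodb start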

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234 Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle: http://localhost:7001/SimpleREST/Products?id=1234 At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the REST-ful Products services using query strings for passing parameter information. Let's begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Let's begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus: you implement a different flow for each of the HTTP verbs that you want your service to support. Let's take a look at how the GET verb is implemented. This is the path that is taken if you were to point your browser to: http://localhost:7001/SimpleREST/Products?id=1234 There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value The Assign action stores the value in an OSB variable named id. Using this type of XPath statement you can query for any variable by name, without regard to its order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query parameter that was passed into the REST proxy. Testing the REST service with a browser is very simple: just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section.
Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the enclosing tags already defined; you just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows: <tp:query-parameters> <tp:parameter name="id" value="1234" /> </tp:query-parameters> Now you are ready to push the Execute button to see the results of the call. That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a REST-ful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a second proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This helps to demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate the processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a Route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second Insert action simply inserts the query-parameters XML node into the request; this element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar Ubuntu and the OSB Test Console You will get an error when you try to use the Test Console with the Oracle Service Bus on Ubuntu (and likely a number of other Linux distros as well). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. Then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field.
By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this, so enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.
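    Pulling the three Insert actions together, the transport section of the $outbound request should end up looking roughly like the sketch below. This is only a sketch: the element names are inferred from the $inbound XPath quoted earlier, so verify them against your own OSB version.

        <ctx:transport>
            <ctx:request>
                <!-- first Insert action: force the HTTP verb -->
                <http:http-method>GET</http:http-method>
                <!-- second Insert action: add the container element, absent by default -->
                <http:query-parameters>
                    <!-- third Insert action: the id parameter, populated from $productID -->
                    <http:parameter name="id" value="1234"/>
                </http:query-parameters>
            </ctx:request>
        </ctx:transport>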

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview versions of Windows 8 and Visual Studio, many people in the community were asking whether the Windows Azure tooling was available for them. The answer was "NO". Microsoft said that the Windows Azure tooling would support only Visual Studio 2010 until Visual Studio 2012 was finally released, at which point it should work there too. But now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.   You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting a version you will download an EXE file. This file will lead you through installing the Web Platform Installer 4.0 (if you haven't installed it yet) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin to install them. But if you want to challenge yourself you can download the components and install them manually. The standalone installers are listed on this page with instructions on how to install them and the necessary prerequisites.   Once you have finished the installation you can open Visual Studio 2012 RC; as usual, it needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category, you will find that there is no project template available. Is something wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4, you will see the Azure project template. This is because Windows Azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before. But there are some new role templates in this SDK. Firstly, the ASP.NET MVC 4 web role is available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and WebAPI on the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role with the necessary references already added to access Windows Azure Service Bus queues. It also has some basic sample code in the worker role class that reads messages from the queue once started. The "Cache Worker Role" is a worker role with the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows role instances to use their own memory as an in-memory distributed cache cluster. By using this feature you can have one or more worker roles act as dedicated cache clusters. Alternatively, you can devote part of your web and worker roles' memory to the cache cluster as well. Let's just create an ASP.NET MVC 4 Web Role and click F5 to run it under the local emulator. If you have been working with Azure for a while, you know that I would normally need to set up the local storage emulator before running locally on a fresh Azure SDK installation.
But in this version, when we start our Azure project, Visual Studio checks whether the storage emulator has been initialized. If not, it runs the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature; it creates the emulator database and tables in the default local database. You can point the storage emulator at a standard SQL Server default instance instead with the command "dsinit /sqlinstance:.". The "dsinit" tool is now located at %ProgramFiles%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio has compiled and deployed the package, our website should be shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator uses IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service. All of this is the same as what you are doing now on SDK 1.6. You can switch to IIS to run your web role in the local emulator: just open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information about this please have a look at Nuno's blog post. In the role property page in Visual Studio there are no massive changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One addition is the Caching tab, where you can specify whether to enable the caching feature and how much memory you want to use as the cache cluster. I will cover it in more detail in future posts. The publish and package features are also unchanged. You can publish your project to Azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, 1.7.30602.1703, in the developer portal.   Summary In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
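    To give a flavor of the "Worker Role with Service Bus Queue" template mentioned above, here is a minimal sketch of the kind of scaffolding it generates. This is reconstructed from memory, not copied from the template: the queue name and the configuration setting name are illustrative, so check what your own generated project uses.

        using System.Diagnostics;
        using Microsoft.ServiceBus.Messaging;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            // Illustrative names; the real template defines its own queue name and setting key.
            const string QueueName = "ProcessingQueue";
            QueueClient client;
            volatile bool stopped;

            public override bool OnStart()
            {
                // The Service Bus connection string lives in the role configuration (.cscfg).
                var cs = RoleEnvironment.GetConfigurationSettingValue("Microsoft.ServiceBus.ConnectionString");
                client = QueueClient.CreateFromConnectionString(cs, QueueName);
                return base.OnStart();
            }

            public override void Run()
            {
                while (!stopped)
                {
                    // Receive blocks until a message arrives or the call times out (returning null).
                    BrokeredMessage msg = client.Receive();
                    if (msg != null)
                    {
                        Trace.WriteLine("Processing message " + msg.SequenceNumber);
                        msg.Complete(); // remove the message from the queue once handled
                    }
                }
            }

            public override void OnStop()
            {
                stopped = true;
                client.Close();
                base.OnStop();
            }
        }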

    Read the article
