Search Results

Search found 87957 results on 3519 pages for 'system error'.


  • IIS 7 + ASP.NET 4

    - by user26712
    Hello, I have an ASP.NET application that I am trying to convert to an ASP.NET 4 application. The application is fairly simple. I have created a new web application in IIS 7.5 pointing to the directory in which the ASP.NET application exists. When I attempt to execute the application by entering http://localhost:[port] into my browser, I receive the following error: Error Summary: HTTP Error 500.21 - Internal Server Error. Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list. Most likely causes: * Managed handler is used; however, ASP.NET is not installed or is not installed completely. * There is a typographical error in the configuration for the handler module list.
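
    A commonly suggested remedy for this particular 500.21 message - offered only as a sketch, on the assumption that the .NET Framework 4 is present on the machine but was installed before IIS, so its handlers were never registered - is to re-register ASP.NET 4 with IIS from an elevated command prompt and then confirm the site's application pool targets .NET Framework 4.0 in Integrated mode:

      REM Re-registers ASP.NET 4 with IIS; use the Framework64 path for 64-bit application pools
      %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i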


  • Bad DMA/do_IRQ errors on suspend/resume, with occasional freezing

    - by Steve Kroon
    Every time I suspend or resume my laptop (Dell Latitude E6520, bought this year), I get 2 messages of the form displayed on the console just before shutting down/starting up: [ 407.107610] ehci_hcd 0000:00:1d.0: dma_pool_free buffer-128, f6f18000/36f18000 (bad dma) On occasion, I get a message of the form: [ 3753.979066] do_IRQ: 0.177 No irq handler for vector (irq -1) On occasion, my machine freezes with a flashing Caps Lock button when suspending, after which I need to do a hard shutdown. This never happened before the messages started appearing (a while back), and I think it never happens without a do_IRQ message appearing (although I'm not sure about that). [There's nothing in the owner's manual on a flashing Caps Lock button; apparently it may be a kernel panic if the scroll lock also flashes, but the laptop doesn't have a scroll lock light, and there's no message on the console saying kernel panic.] Are these bad DMA/do IRQ messages serious, and what can I do to investigate/troubleshoot them and the freezing? Edit: I've also now received the following error messages a few times: [246943.023908] JBD: I/O error detected when updating journal superblock for sdb1. [246943.023958] Buffer I/O error on device sdb1, logical block 0 [246943.023996] EXT3-fs (sdb1): I/O error while writing superblock Edit: Output of dmesg at http://pastebin.com/ra7MTQEj ; contents of /var/log/kern.log at http://pastebin.com/i6jf0Md9 Edit: the output of some smartctl (-a, -x, --log=error, --log=xerror) instructions is available at http://paste.ubuntu.com/1088488/ . Edit (31/8/2012): Output of dmesg|grep -i ehci available at http://paste.ubuntu.com/1177246/ .


  • Solaris 11: new features as seen by the development teams

    - by Eric Bezille
    For those who are not on the distribution list of the French-speaking Solaris user community, here is a small compilation of links to the Solaris 11 developers' blogs, covering the new features in many areas in detail.
    New features on the Desktop: What's new on the Solaris 11 Desktop?; S11 X11: ye olde window system in today's new operating system; Accessible Oracle Solaris 11 - released!
    Development tools: Nagging As a Strategy for Better Linking: -z guidance; Much Ado About Nothing: Stub Objects; Using Stub Objects; The Stub Proto: Not Just For Stub Objects Anymore; elffile: ELF Specific File Identification Utility
    The new packaging system, Image Packaging System (IPS): Replacing the Application Packaging Developer's guide; IPS Self-assembly - Part 1: overlays; Self Assembly - Part 2: Multiple Packages Delivering configuration
    Strengthened security in Solaris: Completely disabling root logins in Solaris 11; Password (PAM) caching for Solaris su - "a la sudo"; User home directory encryption with ZFS; My 11 favorite Solaris 11 features (on security) - by Darren Moffat; Exciting crypto advances with the T4 processor and Oracle Solaris 11; SPARC T4 OpenSSL Engine; Solaris AESNI OpenSSL Engine for Intel Westmere
    Incident management and self-healing: Service Management Facility (SMF) & Fault Management Architecture (FMA): Introducing SMF Layers; Oracle Solaris 11 - New Fault Management Features
    Virtualization: Oracle Solaris Zones: These are 11 of my favorite things! (on zones) - by Mike Gerdts; Immutable Zones on Encrypted ZFS; The IPS System Repository (with zones) - by Tim Foster
    Some bonuses from the Solaris community: Solaris 11 DTrace syscall Provider Changes; Solaris 11 - hostmodel (Control send/receive behavior for IP packets on a multi-homed system); A Quick Tour of Oracle Solaris 11
    Finally, I also encourage you to consult this very useful reference document: Transition from Oracle Solaris 10 to Oracle Solaris 11. Happy reading!


  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
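
    To make the -L search order described above concrete, here is a small illustrative sketch (the paths and object names are invented for the example, not taken from the actual OSnet Makefiles) of a link line that points the link-editor at the workspace proto before it ever consults the default locations:

      # $ROOT is assumed to name the workspace proto area.
      # ld searches the -L directories in order; only a dependency that is not found
      # there falls back to the defaults (/lib, /usr/lib) on the build server.
      ld -G -o libfoo.so.1 pics/foo.o \
          -L$ROOT/lib -L$ROOT/usr/lib \
          -lc -lnsl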
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.


  • About to smash my keyboard!! Ubuntu 13.10 issues with AMD driver & Audio

    - by DNex
    Let me preface this by saying that this is my 2nd day on Linux. I really want to make it work, but these issues are driving me up the wall! I've done exhaustive Google searches but have not been able to figure anything out. I am on Ubuntu 13.10; my graphics card is an AMD Radeon HD4200 and my sound card is a Realtek HDMI. I've tried downloading and installing both drivers but nothing works. Graphics card: When I run the .run file (from http://www2.ati.com/drivers/legacy/amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.zip) I get an error. I check the fglrx-install log and it says this: Check if system has the tools required for installation. fglrx installation requires that the system have kernel headers. /lib/modules/3.11.0-12-generic/build/include/linux/version.h cannot be found on this system. One or more tools required for installation cannot be found on the system. Install the required tools before installing the fglrx driver. Optionally, run the installer with --force option to install without the tools. Forcing install will disable AMD hardware acceleration and may make your system unstable. Not recommended. Audio: Since my first install I've had no audio. I've tried everything outlined in this site: http://itsfoss.com/fix-sound-ubuntu-1304-quick-tip/ to no avail. I've downloaded the Linux drivers from Realtek for HDMI audio but have had no luck. Any help would be extremely appreciated.
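
    For what it's worth, the missing prerequisites named in that log are usually satisfied like this on Ubuntu - a sketch only, assuming the stock 3.11 kernel and working network access for apt; whether the 13.1 legacy driver then actually supports this card and Xorg combination is a separate question:

      # Install the build tools and the headers matching the running kernel
      sudo apt-get update
      sudo apt-get install build-essential linux-headers-$(uname -r)
      # Then re-run the extracted installer (filename as unpacked from the downloaded zip)
      sudo sh ./amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.run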


  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (Gnome-classic), by following this instructions. Problem started after giving these two commands dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb Now, immediately after these commands, Ubuntu Updater popped up and gave me an error that the package catalog is broken and needs to be repaired. Nothing can be installed or removed till then. It also offered a suggestion to run apt-get install -f. I tried that, but again got the same error.Also tried apt-get clean followed by apt-get install -f. Again got the following output: jitesh@jitesh-desktop:~$ sudo apt-get clean jitesh@jitesh-desktop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: kdelibs5-data The following packages will be upgraded: kdelibs5-data 1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded. 2 not fully installed or removed. Need to get 2,832 kB of archives. After this operation, 2,998 kB disk space will be freed. Do you want to continue [Y/n]? y Get:1 http: //in.archive.ubuntu.com/ubuntu/precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB] Fetched 2,832 kB in 32s (86.6 kB/s) dpkg: dependency problems prevent configuration of kdelibs5-data: libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. dpkg: error processing kdelibs5-data (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of kdelibs-data: kdelibs-data depends on kdelibs5-data; however: Package kdelibs5-data is not configured yet. No apport report written because MaxReports is reached already dpkg: error processing kdelibs-data (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: kdelibs5-data kdelibs-data W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages) W: You may want to run apt-get update to correct these problems E: Sub-process /usr/bin/dpkg returned an error code (1) As I thought the error was related to configuring kdelibs, I tried to configure using dpkg. But got the following errors: jitesh@jitesh-desktop:~$ sudo dpkg --configure -a dpkg: dependency problems prevent configuration of kdelibs5-data: libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. 
dpkg: error processing kdelibs5-data (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of kdelibs-data: kdelibs-data depends on kdelibs5-data; however: Package kdelibs5-data is not configured yet. dpkg: error processing kdelibs-data (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: kdelibs5-data kdelibs-data jitesh@jitesh-desktop:~$ Now I don't have any idea how to proceed. I am unable to install anything from Software Centre or using Terminal now. Some basic info: Core2Duo, dual booting Ubuntu 12.04 with Win7. Fresh install of Ubuntu 12.04 (not upgrade). Incidentally, I had first upgraded from 10.04 and had successfully installed Amarok 1.4 following this same method. But due to other issues, I had to format and do a clean install of 12.04. Now when I tried to install Amarok 1.4, I'm getting these errors. I also have digiKam and k3b installed, if that can be of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.
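
    One possible way out - sketched here under the assumption that the goal is simply to undo the two manually installed, much older kdelibs packages and let apt put the stock 12.04 versions back - is to remove them with dpkg and then let apt repair the rest:

      # Remove the downgraded packages that libplasma3/kate-data/katepart conflict with
      sudo dpkg --remove --force-depends kdelibs-data kdelibs5-data
      # Let apt reinstall the current precise version of kdelibs5-data and fix anything left over
      sudo apt-get update
      sudo apt-get install -f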


  • Insurers Pushed to Transform Their Business

    - by Calvin Glenn
    Everyone in the P&C industry has heard it “We can’t do it.” “Nobody wants to do it.” “We can’t afford to do it.”  Unfortunately, what they’re referencing are the reasons many insurers are still trying to maintain their business processing on legacy policy administration systems, attempting to bide time until there is no other recourse but to give in, bite the bullet, and take on the monumental task of replacing an entire policy administration system (PAS). Just the thought of that project sends IT, Business Users and Management reeling. However, is that fear real?  It is a bit daunting when one realizes that a complete policy administration system replacement will touch most every function an insurer manages, from quoting and rating, to underwriting, distribution, and even customer service. With that, everyone has heard at least one horror story around a transformation initiative that has far exceeded budget and the promised implementation / go-live timeline.    But, does it have to be that hard?  Surely, in the age where a person can voice-activate their DVR to record a TV program from a cell phone, there has to be someone somewhere who’s figured out how to simplify this process. To be able to help insurers, of all sizes, transform and grow their business while also delivering on their overall objectives of providing speed to market, straight-through-processing for applications, quoting, underwriting, and simplified product development. Maybe we’re looking too hard and the answer is simple and straight-forward. Why replace the entire machine when all it really needs is a new part…a single enterprise rating system? This core, modular piece of the policy administration system is the foundation of product development and rate management that enables insurers to provide the right product at the right price to the right customer through the best channels at any given moment in time. The real benefit of a single enterprise rating system is the ability to deliver enhanced business capabilities, such as improved product management, streamlined underwriting, and speed to market. With these benefits, carriers have accomplished a portion of their overall transformation goal. Furthermore, lessons learned from the rating project can be applied to the bigger, down-the-road PAS project to support the successful completion of the overall transformation endeavor. At the recent Oracle OpenWorld Conference in San Francisco, information was shared with attendees about a recent “go-live” project from an Oracle Insurance Tier 1 insurer who did what is proposed above…replaced just the rating portion of their legacy policy administration system with Oracle Insurance Insbridge Rating and Underwriting.  This change provided the insurer greater flexibility to set rates that better reflect risk while enabling the company to support its market segment strategy. Using the Oracle Insurance Insbridge enterprise rating solution, the insurer was able to reduce processing time for agents and underwriters, gained the ability to support proprietary rating models and improved pricing accuracy.      There is mounting pressure on P&C insurers to produce growth and show net profitability in the midst of modest overall industry growth, large weather-related losses and intensifying competition for market share.  Insurers are also being asked to improve customer service, offer a differentiated value proposition and simplify insurance processes.  
While the demands are many, there is an easy answer: invest in and update the most mission-critical application in your arsenal, the single enterprise rating system. Download the podcast to listen to “Stand-Alone Rating Engine - Leading Force Behind Core Transformation Projects in the P&C Market,” originally recorded in October 2013. Related Resources: White Paper: Stand-Alone Rating Engine: Leading Force Behind Core Transformation Projects in the P&C Market; Webcast On Demand: Stand-Alone Rating Engine and Core Transformation for P&C Insurers. Don’t forget to keep up with us year-round: Facebook: www.facebook.com/oracleinsurance Twitter: www.twitter.com/oracleinsurance YouTube: www.youtube.com/oracleinsurance


  • Nginx no longer serves uwsgi application behind HAProxy - looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work though when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently i.e. looking for a static file instead of calling the WSGI container. Unfortunately I can't figure out what's exactly going (wr)on(g). Here are the relevant config sections of these three component's config files. At least I guess they are interesting. If you miss anything, please let me know. 1) haproxy.conf frontend app-lb bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem default_backend nginx-servers mode http backend nginx-servers balance leastconn option forwardfor server nginx-01 nginx-server-int-01.domain.com:80 check 2) nginx.conf: sendfile off; #tcp_nopush on; keepalive_timeout 65; include /etc/nginx/conf.d/*.conf; server { server_name nginx-server-int-01.domain.com; root /path/to/app/; location / { uwsgi_pass unix:///tmp/app.sock; include uwsgi_params; uwsgi_read_timeout 600; # Requests can run for a serious long time } 3) uwsgi.ini [uwsgi] chdir = /path/to/app/ chmod-socket = 777 no-default-app = True socket = /tmp/app.sock manage-script-name = True mount = /dids=did.py mount = /replicas=replica.py callable = application Now when I let my clients go against nginx-server-int-01.domain.com everything is fine. In the access.log of nginx lines like these are appearing: 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" 128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-" But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like 
these: 2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer" 2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com" 2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com" As you can see, that request looks the same, only the client IP changed, from the client's host to the one from loadbalancer.domain.com. But due to what ever reasons ngxin seems to assume that it is a static file to be served which eventually results in the file not found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
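
    One quick way to test that theory - a sketch using only the hostnames already shown in the configs above - is to replay one of the failing requests directly against nginx but with the Host header the load balancer presents. If nginx then also looks under /usr/share/nginx/html, the problem is almost certainly that requests arriving with Host: loadbalancer.domain.com do not match server_name nginx-server-int-01.domain.com and therefore land in the default server block; adding that hostname to server_name (or marking this server block default_server) would be the usual remedy.

      # Same URI as a failing request, sent straight to nginx with the load balancer's Host header
      curl -v -X POST -H "Host: loadbalancer.domain.com" \
          http://nginx-server-int-01.domain.com/replicas/list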


  • Cannot run C compiled programs. If you meant to cross compile, use `--host' (Ubuntu)

    - by Ali.A
    I'm completely new to Ubuntu. I downloaded and unpacked a tar.gz package; after extraction, its documentation says: "type ./configure --disable-gts". But when I run this command alone it gives me a "Permission denied" error. Then I tried to use "sh ./configure --disable-gts" instead, but this time I faced this error: configure: error: cannot run C compiled programs. If you meant to cross compile, use `--host'. How can I overcome these two problems (I mean the permission and compile errors)? I'm just a rookie and I need urgent help. Thanks
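
    A sketch of how both symptoms are commonly dealt with, assuming the archive simply lost its execute bits on extraction and that no C compiler is installed yet (the second error is what configure prints when it cannot build and run a test program):

      # Install a C compiler and the usual build tools
      sudo apt-get install build-essential
      # Restore the execute bit instead of running the script through sh
      cd path/to/the/extracted/package
      chmod +x configure
      ./configure --disable-gts
      make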


  • Starting it back up again

    - by Mickey Gousset
    After a couple of years’ hiatus from blogging at Geeks With Blogs, I’m back! I’m still blogging about Visual Studio 2010 and TFS 2010 over at Team System Rocks (soon to undergo some major revisions), so expect to see some cross postings from there. Here, though, I expect to focus on System Center technologies (mostly System Center Operations Manager and System Center Service Manager, with some of the others thrown in there too), as that is my day job now. I’ll also use this blog to start tracking my foray into Windows Phone 7 development. I’ve decided to go the game programming route first.


  • fd partitions gone from 2 disks, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6Tb md raid5 volume built over 7*1Tb disks. I've had to shut down the server lately and when it went back up, 2 out of the 7 disks used for the raid volume had lost its conf : dmesg : [ 10.184167] sda: sda1 sda2 sda3 // System disk [ 10.202072] sdb: sdb1 [ 10.210073] sdc: sdc1 [ 10.222073] sdd: sdd1 [ 10.229330] sde: sde1 [ 10.239449] sdf: sdf1 [ 11.099896] sdg: unknown partition table [ 11.255641] sdh: unknown partition table All 7 disks have same geometry and were configured alike : dmesg : Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect All 7 disks (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in a md raid5 xfs volume. When booting, md, which was (obviously) out of sync kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well: dmesg : [ 19.566941] md: md0 stopped. [ 19.817038] md: bind<sdc1> [ 19.817339] md: bind<sdd1> [ 19.817465] md: bind<sde1> [ 19.817739] md: bind<sdf1> [ 19.817917] md: bind<sdh> [ 19.818079] md: bind<sdg> [ 19.818198] md: bind<sdb1> [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction [ 19.825259] raid5: device sdb1 operational as raid disk 0 [ 19.825261] raid5: device sdg operational as raid disk 6 [ 19.825262] raid5: device sdh operational as raid disk 5 [ 19.825264] raid5: device sdf1 operational as raid disk 4 [ 19.825265] raid5: device sde1 operational as raid disk 3 [ 19.825267] raid5: device sdd1 operational as raid disk 2 [ 19.825268] raid5: device sdc1 operational as raid disk 1 [ 19.825665] raid5: allocated 7334kB for md0 [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2 [ 19.825669] RAID5 conf printout: [ 19.825670] --- rd:7 wd:7 [ 19.825671] disk 0, o:1, dev:sdb1 [ 19.825672] disk 1, o:1, dev:sdc1 [ 19.825673] disk 2, o:1, dev:sdd1 [ 19.825675] disk 3, o:1, dev:sde1 [ 19.825676] disk 4, o:1, dev:sdf1 [ 19.825677] disk 5, o:1, dev:sdh [ 19.825679] disk 6, o:1, dev:sdg [ 19.899787] PM: Starting manual resume from disk [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device [ 28.663228] XFS mounting filesystem md0 [ 28.884433] md: resync of RAID array md0 [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk. [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync. [ 28.884433] md: using 128k window, over a total of 976759936 blocks. [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal) [ 32.680486] XFS: xlog_recover_process_data: bad clientid [ 32.680495] XFS: log mount/recovery failed: error 5 [ 32.682773] XFS: log mount failed I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array but it didnt work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and and sdh1, shich explains why it wont use it. I just don't know why those partitions are gone from /dev and how to readd those... 
blkid : /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2" /dev/sda2: TYPE="swap" /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3" /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid" fdisk -l : Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8c878c87 Device Boot Start End Blocks Id System /dev/sda1 * 1 12 96358+ 83 Linux /dev/sda2 13 134 979965 82 Linux swap / Solaris /dev/sda3 135 4865 38001757+ 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x1e7481a5 Device Boot Start End Blocks Id System /dev/sdb1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xc9bdc1e9 Device Boot Start End Blocks Id System /dev/sdc1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xcc356c30 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe87f7a3d Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xb17a2d22 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x8f3bce61 Device Boot Start End Blocks Id System /dev/sdg1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa98062ce Device Boot Start End Blocks Id System /dev/sdh1 1 121601 976760001 fd Linux raid autodetect I really dont know what happened nor how to recover from this mess. Needless to say the 5TB or so worth of data sitting on those disks are very valuable to me... Any idea any one? Did anybody ever experienced a similar situation or know how to recover from it ? Can someone help me? I'm really desperate... :x
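
    Strictly as a read-mostly sketch of a first step, and with the usual warning that a wrong move here can cost the whole array: the missing /dev/sdg1 and /dev/sdh1 nodes normally appear once the kernel re-reads the partition tables, which it will not do while md holds the raw disks open. Something along these lines could confirm what metadata is actually where before any reassembly is attempted:

      # Stop the array so sdg/sdh are no longer held open as whole-disk members
      sudo mdadm --stop /dev/md0
      # Ask the kernel to re-read the recreated partition tables
      sudo partprobe /dev/sdg /dev/sdh      # or: blockdev --rereadpt /dev/sdg
      # Inspect the md superblocks (read-only) on both the raw disks and the new partitions
      sudo mdadm --examine /dev/sdg /dev/sdh /dev/sdg1 /dev/sdh1
      # Only once --examine shows consistent metadata on all seven members should an
      # explicit assemble from the partitions (not the raw disks) be considered.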


  • Change the Integrated Weblogic Port number

    - by pavan.pvj
    There came a situation where I wanted to work with two JDevelopers simultaneously and start two different applications in two JDEVs. (Both of them have to be in separate installation locations, else it will create a problem because of the system directory.) Now, when we want to start WLS in JDEV, only the first one will be started and the other one fails with a port-conflict exception. Until a few days back, the million-dollar question was how to change the integrated WLS port number. So, here's the answer after some R&D. In the View menu, click on "Application Server Navigator". Right-click on Integrated Weblogic server. 1) If it is the first time that you are trying to start the server, then there is a menu "Create Default Domain". If you click on this, a window will be displayed where it asks for the preferred port number. Change it here. 2) If the domain is already created, then click on Properties and change the preferred port number. Again, if you want to change the port before starting JDEV, then from the file system go to $JDEV_USER_HOME/systemxxx/o.j2ee, open the file adrs-instances.xml, and change the http-port in the startup-preferences: <hash n="startup-preferences">   <value n="http-port" v="7111"/></hash> Note 1: adrs-instances.xml will be created ONLY after you create the default domain. Note 2: systemxxx refers to system.<JDEV version>, like system.11.1.1.3.56.59 for PS2. Note 3: $JDEV_USER_HOME - in Windows - would be "C:\Documents and Settings\[user_name]\Application Data\JDeveloper". Now you can run multiple Integrated WLS instances simultaneously. But please be aware that running more than one WLS server will degrade system performance.


  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling. Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models. XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual. Let's take a visual look at how this works. Here is an example of a model developed in BizAgi. This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen in the original post's illustrations (model converted to a Tutor procedure, showing the task section of the procedure after conversion from an XPDL file; model converted to BPA; model converted to BPM). End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution.
    But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information. References: Wikipedia, XPDL; Workflow Management Coalition, XPDL Support and Resources; Oracle Business Process Converter manual, Oracle Tutor 14; Oracle Business Process Management 11g. If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM


  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved Logging – A log blog In a another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to be able to answer a few simple logging rules:   To log or not to log? That is the question – Always log! That is the answer  Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation – Do Not Share!  How verbose? Your log can be very verbose – a good thing when testing, very terse – a good thing in day-to-day runs, or somewhere in between. You must be the judge. In my Blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log belongs to which level and thus we can set the .config differently for production and testing.  Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs in the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file Print One or Many? There are two options here:   1.     Print many, Open but once once – you start the stream and close it only when the program ends. This is what you can do when you perform in “batch” mode like in a console app or a stsadm extension.The advantage to this is that starting a closing a stream is expensive and time consuming and because we use a unique file, keeping it open for a long time does not cause contention problems. 2.     Print one entry at a time or Open many – every time you write a line, you start the stream, write to it and close it. This work for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible.  A default value of the onceOrMany resides in the .config.  All of the above applies to any windows or web application, not just SharePoint.  So as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.   
So without further ado, here is app.config  <?xml version="1.0" encoding="utf-8" ?> <configuration>     <configSections>         <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, ublicKeyToken=b77a5c561934e089" >         <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />         </sectionGroup>     </configSections>     <applicationSettings>         <statics.Properties.Settings>             <setting name="oneOrMany" serializeAs="String">                 <value>False</value>             </setting>             <setting name="logURI" serializeAs="String">                 <value>C:\staticLog.txt</value>             </setting>             <setting name="highestLevel" serializeAs="String">                 <value>2</value>             </setting>         </statics.Properties.Settings>     </applicationSettings> </configuration>   And now the code:  In order to persist the variables between calls and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the “static” Main does. The various logging functions and variables are created as static because they do not need instantiation and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything, just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form: Class.Function.  using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace statics {       class Program     {         static void Main(string[] args)         {             //write a single line             EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);             //this single line will not be written because the msgLevel is too high             EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);             //The next 4 lines will be written in succession - no closing             EventLog.LogLine("blah blah", 1);             EventLog.LogLine("da da", 1);             EventLog.LogLine("ma ma", 1);             EventLog.LogLine("lah lah", 1);             EventLog.CloseLog(); // log will close             //now with specific functions             EventLog.LogSingleLine("one line", 1);             //this is just a test, the log is already closed             EventLog.CloseLog();         }     }     public class EventLog     {         public static string logURI = Properties.Settings.Default.logURI;         public static bool isOneLine = Properties.Settings.Default.oneOrMany;         public static bool isOpen = false;         public static int highestLevel = Properties.Settings.Default.highestLevel;         public static StreamWriter sw;         /// <summary>         /// the program will "print" the msg into the log         /// unless msgLevel is > msgLimit         /// onceOrMany is true when once - the program will open the log         /// print the msg and close the log. 
False when many the program will         /// keep the log open until close = true         /// normally all the arguments will come from the app.config         /// called by many overloads of logLine         /// </summary>         /// <param name="msg"></param>         /// <param name="msgLevel"></param>         /// <param name="logFileName"></param>         /// <param name="msgLimit"></param>         /// <param name="onceOrMany"></param>         /// <param name="close"></param>         public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)         {             //to print or not to print             if (msgLevel <= msgLimit)             {                 //open the file. from the argument (logFileName) or from the config (logURI)                 if (!isOpen)                 {                     string logFile = logFileName;                     if (logFileName == "")                     {                         logFile = logURI;                     }                     sw = new StreamWriter(logFile, true);                     sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));                     isOpen = true;                 }                 //print                 sw.WriteLine(msg);             }             //close when instructed             if (close || oneOrMany)             {                 if (isOpen)                 {                     sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));                     sw.Close();                     isOpen = false;                 }             }         }           /// <summary>         /// The simplest, just msg and level         /// </summary>         /// <param name="msg"></param>         /// <param name="msgLevel"></param>         public static void LogLine(string msg, int msgLevel)         {             //use the given msg and msgLevel and all others are defaults             LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);         }                 /// <summary>         /// one line at a time - open print close         /// </summary>         /// <param name="msg"></param>         /// <param name="msgLevel"></param>         public static void LogSingleLine(string msg, int msgLevel)         {             LogEvents(msg, msgLevel, "", highestLevel, true, true);         }           /// <summary>         /// used to close. high level, low limit, once and close are set         /// </summary>         /// <param name="close"></param>         public static void CloseLog()         {             LogEvents("", 15, "", 1, true, true);         }           }     }   }   That’s all folks!


  • Ubuntu impossible to install: "unable to find a medium containing a live filesystem"

    - by Lorenzo
    Yes, I have read questions and answers with similar titles for this issue, which prevented me from installing Ubuntu for several MONTHS now, trying to figure it out. I have a MacBookPro with triple partition (one for Mac Snow Leopard, one for Windows 7, one for Linux) created with reFIT firmware (not BootCamp). I set up the system according to these instructions for reFIT: http://lifehacker.com/5531037/how-to-triple+boot-your-mac-with-windows-and-linux-no-boot-camp-required Now. There is a free partition ready to accept Linux into its arms, but Linux does not want to participate. Most answers to the issue "unable to find a medium containing a live filesystem" point to changing the BIOS booting system (which I don't know how to do, especially using this reFIT booting system), and to changing the socket of the USB (which does not concern me, since I am using a CD, actually I tried with a CD, then with a DVD, since a blank CD is only 700MB while the iso image file of Ubuntu is about 731MB). Anyway. This is what happens: I am in the Mac system (using the Mac partition) I insert the DVD with the burned image of Ubuntu (yes I have tried burning it again and agin on both CD and DVD blank discs). I restart the computer. When reFIT loads, I hold down the ALT key until the CD image appears. I select it, and hit Enter. A small Ubuntu icon appears at the bottom of the screen. Then a Ubuntu sign appears in the middle of the screen with small dots underneath, lighting up progressively over and over to indicate it's loading. Then everything turns black and the following message appears, at the end of a few lines of text: "unable to find medium with live file system". Please provide very practical suggestions on what to do to an unexperienced wannabe Ubuntu very patient user. Please start by saying how do I access the BIOS setup from reFIT bootup, and exactly what and why I need to try and change. (Will this mess up my reFIT bootup?) And anything else I need to do to finally be able to install Ubuntu. Thanks
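
    Before digging into firmware settings, it may be worth ruling out a bad download or burn, since this message very often just means the installer cannot read its own medium. A small sketch: check the ISO's hash against the published values for the release (the UbuntuHashes page on the Ubuntu wiki) and, if it matches, re-burn the disc at the lowest speed available:

      # On OS X (path assumes the image was saved to Downloads):
      md5 ~/Downloads/ubuntu-*.iso
      # On a Linux machine the equivalent would be:
      # md5sum ~/Downloads/ubuntu-*.iso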

    Read the article

  • How to configure 2 lan cards?

    - by Gurupal singh
    I have Ubuntu 14.04 installed on my system and I use it as a server in my office, where 15 employees work. I have 2 LAN cards in the system: one takes the input from the router to my Ubuntu server, and the other goes from the server to a switch, which connects all my employees to the server via LAN cables. Now, how can I configure both LAN cards, given that I want to block some sites and put some restrictions on the network? Please help me out. It's really a big problem for me. Thanks.

    Read the article

  • GPU hung when switching graphic card

    - by Lie Ryan
    I have a laptop (Dell Inspiron N4110) with switchable graphics.

    $ lspci | grep VGA
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (rev ff)

    Normally, my laptop starts with both graphics cards enabled, which caused the laptop to turn very hot and the fan to become very noisy, so I have been using a small script to disable the Radeon card. For some time I was quite happy with this arrangement. However, I have been having some issues with the Intel card (IGD): it often hangs at random when running OpenGL apps, so I want to give the Radeon card (DIS) another chance. I have never been able to switch to the Radeon card, but recently I found out that if I do a "delayed switching" (DDIS):

    # echo "DDIS" > /sys/kernel/debug/vgaswitcheroo/switch
    root@lieryan-dell-ubuntu:/sys/kernel/debug/vgaswitcheroo# cat switch
    0:IGD:+:Pwr:0000:00:02.0
    1:DIS: :Pwr:0000:01:00.0

    and then log off (i.e. to restart X), the screen switches to a text console and then gets stuck there, frozen. At that point the mouse and keyboard stop working, so I can't switch to another tty. I tried ssh-ing in from another computer to salvage logs (dmesg at that point) and whatnot, and found out that during the freeze the active graphics card is the AMD card:

    -- this is from ssh --
    # cat switch
    0:IGD: :Off:0000:00:02.0
    1:DIS:+:Pwr:0000:01:00.0

    but the GPU is apparently hung; looking at dmesg gives:

    ...
    [ 1411.649974] vga_switcheroo: client 0 refused switch
    [ 1411.649985] vga_switcheroo: setting delayed switch to client 1
    [ 1423.911759] vga_switcheroo: processing delayed switch to 1
    [ 1424.006564] fbcon: Remapping primary device, fb1, to tty 1-63
    [ 1424.006799] i915: switched off
    [ 1424.840351] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
    [ 1425.718088] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
    [ 1426.622377] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
    [ 1427.355683] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
    [ 1428.193549] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
    ...
    the invalid framebuffer id error is repeated many more times
    ...

    I was able to recover by switching back to the Intel card and restarting X from ssh, which indicates that only the Radeon card has problems switching.

    System info:

    $ uname -a
    Linux lieryan-dell-ubuntu 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 11.10
    Release: 11.10
    Codename: oneiric

    The laptop does not have an option in the BIOS to select the graphics card, and the proprietary driver, fglrx, has never worked either: when I installed it through jockey ("Additional Drivers"), glxinfo showed that rendering was still being done by Mesa, the /sys/kernel/debug/vgaswitcheroo directory went missing, and the driver crashes with a traceback if I use xorg.conf to tell X to use fglrx. Does anyone have any idea whether it is possible to use this AMD card with either the radeon or the fglrx driver? logs: dmesg

    Read the article

  • Cannot delete apt-fast for a clean install

    - by colby
    This is my problem:

    $ destroy apt-fast
    [sudo] password for colbyryptos:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package apt-fast is not installed, so not removed
    0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    Setting up man-db (2.6.1-2) ...
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing man-db (--configure):
     subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
     man-db
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have also tried sudo rm /var/lib/dpkg/lock, followed by sudo dpkg --configure -a. It then gives me this:

    $ sudo dpkg --configure -a
    [sudo] password for colbyryptos:
    Setting up man-db (2.6.1-2) ...
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing man-db (--configure):
     subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
     man-db

    Read the article

  • Passing javascript array of objects to WebService

    - by Yousef_Jadallah
    Hi folks. In this topic I will illustrate how to pass an array of objects to a WebService and how to deal with it in your WebService. Suppose we have this javascript code:

    <script language="javascript" type="text/javascript">
        var people = new Array();

        function person(playerID, playerName, playerPPD) {
            this.PlayerID = playerID;
            this.PlayerName = playerName;
            this.PlayerPPD = parseFloat(playerPPD);
        }

        function saveSignup() {
            addSomeSampleInfo();
            WebService.SaveSignups(people, SucceededCallback);
        }

        function SucceededCallback(result, eventArgs) {
            var RsltElem = document.getElementById("divStatusMessage");
            RsltElem.innerHTML = result;
        }

        function OnError(error) {
            alert("Service Error: " + error.get_message());
        }

        function addSomeSampleInfo() {
            people[people.length++] = new person(123, "Person 1 Name", 10);
            people[people.length++] = new person(234, "Person 2 Name", 20);
            people[people.length++] = new person(345, "Person 3 Name", 10.5);
        }
    </script>

    people: the array that we want to send to the WebService.
    person: the constructor function that we use to create the objects in our array.
    SucceededCallback: the callback function invoked if the Web Service call succeeds.
    OnError: the error callback function, so any errors that occur when the Web Service is called will trigger this function.
    saveSignup: the function used to call the WebService method (SaveSignups); the first parameter is the array that we pass to the WebService and the second is the name of the callback function.

    Here is the body of the page:

    <body>
        <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server">
                <Services>
                    <asp:ServiceReference Path="WebService.asmx" />
                </Services>
            </asp:ScriptManager>
            <input type="button" id="btn1" onclick="saveSignup()" value="Click" />
            <div id="divStatusMessage">
            </div>
        </form>
    </body>
    </html>

    The main thing is the ServiceReference and its path "WebService.asmx"; this is the Web Service that we are using in this example.

    A web service will be used to receive the javascript array and handle it in our code:

    using System;
    using System.Web;
    using System.Web.Services;
    using System.Xml;
    using System.Web.Services.Protocols;
    using System.Web.Script.Services;
    using System.Data.SqlClient;
    using System.Collections.Generic;

    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ScriptService]
    public class WebService : System.Web.Services.WebService
    {
        [WebMethod]
        public string SaveSignups(object[] values)
        {
            string strOutput = "";
            string PlayerID = "", PlayerName = "", PlayerPPD = "";
            foreach (object value in values)
            {
                Dictionary<string, object> dicValues = new Dictionary<string, object>();
                dicValues = (Dictionary<string, object>)value;
                PlayerID = dicValues["PlayerID"].ToString();
                PlayerName = dicValues["PlayerName"].ToString();
                PlayerPPD = dicValues["PlayerPPD"].ToString();
                strOutput += "PlayerID = " + PlayerID + ", PlayerName=" + PlayerName + ",PlayerPPD= " + PlayerPPD + "<br>";
            }
            return strOutput;
        }
    }

    The first thing to note is that I import the System.Collections.Generic namespace; we need it to use the Dictionary class. You can see in this code that the javascript objects arrive as an array of object called values; we then take each object separately and cast it to Dictionary<string, object>. The Dictionary represents a collection of keys and values: in Dictionary<TKey, TValue>, TKey is the type of the keys in the dictionary and TValue is the type of the values.
    For more information about Dictionary check this link: http://msdn.microsoft.com/en-us/library/xfhwa508(VS.80).aspx

    Now we can get the value of every element, because we have a mapping from a set of keys to a set of values; the keys in this example are PlayerID, PlayerName, and PlayerPPD, created from the original person object.

    Ultimately, this Web method returns the values as a string, but the main idea is to show you how to deal with an array of objects, convert each element to a Dictionary<string, object>, and read the values out of that Dictionary.

    Hope this helps,
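    As a design note, the loosely typed object[]/Dictionary approach above is not the only option: ASP.NET AJAX script services can also bind the incoming JSON directly to a typed parameter. Below is a minimal sketch of that alternative, placed inside the same WebService class; the Person class and the method name SaveSignupsTyped are illustrative assumptions, not part of the original article.

    // Hypothetical typed alternative - assumes the same JSON shape sent by the page above.
    public class Person
    {
        public int PlayerID { get; set; }
        public string PlayerName { get; set; }
        public double PlayerPPD { get; set; }
    }

    [WebMethod]
    public string SaveSignupsTyped(Person[] values)
    {
        string strOutput = "";
        foreach (Person p in values)
        {
            // No casting needed: the script service deserializes each JSON object into Person,
            // matching on the property names (PlayerID, PlayerName, PlayerPPD).
            strOutput += "PlayerID = " + p.PlayerID + ", PlayerName = " + p.PlayerName +
                         ", PlayerPPD = " + p.PlayerPPD + "<br>";
        }
        return strOutput;
    }

    The javascript on the page would not need to change, since the objects it sends already carry those property names.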

    Read the article

  • Can't mount windows partition?

    - by C.J.
    When I try to open the Windows partition from Ubuntu I receive the error:

    Unable to mount 55 GB Filesystem
    Error mounting: mount exited with exit code 13:
    ntfs_mst_post_read_fixup_warn: magic: 0x04010400 size: 1024 usa_ofs: 1026 usa_count: 1026: Invalid argument
    Record 6 has no FILE magic (0x4010400)
    Failed to open inode FILE_Bitmap: Input/output error
    Failed to mount '/dev/sda2': Input/output error
    NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more detail.

    Additionally, I can't boot into the Windows partition; I've tried updating GRUB many times but Windows won't show up in the GRUB menu. Does anybody know what all this means, and how I might fix it? I thank you in advance for any help.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

    For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

    class Transformer {
        void Process(InputSource input) {
            try {
                var inRecords = _parser.Parse(input.Stream);
                var outRecords = _pipeline.Transform(inRecords);
            } catch (Exception ex) {
                var inner = new ProcessException(input, ex);
                _logger.Error("Unable to parse source " + input.Name, inner);
                throw inner;
            }
        }
    }

    class Pipeline {
        IEnumerable<Result> Transform(IEnumerable<Record> records) {
            // NOTE: no try/catch as I have no useful information to provide
            // at this point in the process
            var results = _algorithm.Process(records);
            // examine and do useful things with results
            return results;
        }
    }

    class Algorithm {
        IEnumerable<Result> Process(IEnumerable<Record> records) {
            var results = new List<Result>();
            foreach (var engine in Engines) {
                foreach (var record in records) {
                    try {
                        results.Add(engine.Process(record));
                    } catch (Exception ex) {
                        var inner = new EngineProcessingException(engine, record, ex);
                        _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                        throw inner;
                    }
                }
            }
            return results;
        }
    }

    class Engine {
        Result Process(Record record) {
            for (int i = 0; i < record.SubRecords.Count; ++i) {
                try {
                    Validate(record.SubRecords[i]);
                } catch (Exception ex) {
                    var inner = new RecordValidationException(record, i, ex);
                    _logger.Error("Validation of subrecord {0} failed for record {1}", i, record);
                    throw inner;
                }
            }
        }
    }

    There are a few important things to notice:
    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all the important and useful information
    - Logging only happens when failing to do so would cause loss of useful information at a lower level.

    Thoughts and concerns:
    - I don't like having so many log entries for each error.
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message.
    - I can log at different levels (e.g., warning, informational).
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced).
    - The information available at higher levels should not be passed to the lower levels.

    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
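    For comparison with the concerns listed above, here is a rough sketch of the main alternative the question is weighing: wrap and rethrow with added context at each level, but log only once at the outermost boundary. This is a hypothetical illustration that reuses the names from the example; it is not code from the question.

    // Hypothetical sketch: add context at each level, log a single entry at the boundary.
    class Transformer {
        void Process(InputSource input) {
            try {
                var inRecords = _parser.Parse(input.Stream);
                var outRecords = _pipeline.Transform(inRecords);
            } catch (Exception ex) {
                // The only log call; the wrapped inner exceptions
                // (EngineProcessingException, RecordValidationException, ...)
                // still carry the low-level context for the log sink to render.
                _logger.Error("Unable to process source " + input.Name, ex);
                throw new ProcessException(input, ex);
            }
        }
    }

    class Algorithm {
        IEnumerable<Result> Process(IEnumerable<Record> records) {
            var results = new List<Result>();
            foreach (var engine in Engines) {
                foreach (var record in records) {
                    try {
                        results.Add(engine.Process(record));
                    } catch (Exception ex) {
                        // No logging here: wrap with context and let the boundary log it once.
                        throw new EngineProcessingException(engine, record, ex);
                    }
                }
            }
            return results;
        }
    }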

    Read the article

  • Computer not acknowledging sound card after move

    - by Angela
    We just got our computer back after moving, and when I tried to hook up our speaker system, the computer kept telling me that I don't have a sound system installed. It's a Dell computer with the original surround/subwoofer system that it came with, and it was working fine before the move. I tried buying a new speaker system (Logitech) to see if it would work, but it still says I have no audio device installed, and nothing shows up in Device Manager. I opened up the computer and unplugged the sound card, then plugged it back in; still nothing. I put the sound card's install disc in the CD-ROM drive, but it says that I have no sound device hardware in my system.

    Read the article

  • Can't get Ubuntu 11.10 working in VirtualBox running on Mac OS X 10.6.8

    - by stack-o-frankie
    I installed the Guest Additions, installed the isight-firmware-tools by using the AppleUSBVideoSupport file, but I still can't get access to the iSight webcam. When I launch vlc v4l2:///dev/video0 I get the following errors:

    Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
    Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
    [0x92d492c] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    Blocked: call to setlocale(6, "")
    Blocked: call to setlocale(6, "")
    (process:2922): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (vlc:2922): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    [0x963287c] v4l2 demux error: VIDIOC_STREAMON failed
    [0x963287c] v4l2 demux error: cannot set input (Device or resource busy)
    [0x96430a4] v4l2 access error: VIDIOC_STREAMON failed
    [0x96430a4] v4l2 access error: cannot set input (Device or resource busy)
    [0x9371104] main input error: open of `v4l2:///dev/video0' failed: (null)

    Any clue?

    Read the article
