Search Results

Search found 123284 results on 4932 pages for 'error code'.


  • Access Denied Error in PagesListCPVEventReceiver post SharePoint SP2 upgrade

    - by Jeff
    I am seeing the following errors from one of the SharePoint web front ends after the SP2 upgrade. Has anyone else seen this error or found a solution?

        Event Type: Error
        Event Source: Windows SharePoint Services 3
        Event Category: General
        Event ID: 6875
        Date: 2009-10-27
        Time: 13:09:57
        User: N/A
        Computer: XXXXXXX
        Description: Error loading and running event receiver
        Microsoft.SharePoint.Publishing.PagesListCPVEventReceiver in
        Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral,
        PublicKeyToken=71e9bce111e9429c. Additional information is below. :
        Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

    For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, SSH into the web app and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example, new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or anything as to why my proposed solution is a bad idea? Thanks all
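
    For what it's worth, the cron-driven hook can stay small. A sketch in Python; the repo path, asset bucket, and schedule below are illustrative assumptions, not details from the question:

        #!/usr/bin/env python3
        # Deploy hook sketch: pull the latest code, then sync assets that
        # aren't tracked in git (e.g. uploaded blog images) from S3.
        # Run from cron, e.g.:  */15 * * * * /usr/bin/python3 /srv/deploy_hook.py
        import subprocess
        import sys

        REPO_DIR = "/srv/webapp"               # hypothetical checkout location
        ASSET_BUCKET = "s3://example-assets"   # hypothetical asset bucket

        def run(cmd, cwd=None):
            # Fail loudly so cron emails the error output to the admin.
            result = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
            if result.returncode != 0:
                sys.stderr.write(result.stderr)
                sys.exit(result.returncode)

        run(["git", "pull", "--ff-only"], cwd=REPO_DIR)
        run(["aws", "s3", "sync", ASSET_BUCKET, REPO_DIR + "/public/assets"])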

    Read the article

  • Google Chrome giving error 138

    - by gsingh2011
    Google Chrome randomly stopped working one day and is giving me this error:

        Google Chrome is having trouble accessing the network. This may be
        because your firewall or antivirus software wrongly thinks that Google
        Chrome is an intruder on your computer and is blocking it from
        connecting to the Internet. Here are some suggestions: Add Google
        Chrome as a permitted programme in your firewall or antivirus
        software's settings. If it is already a permitted programme, try
        deleting it from the list of permitted programmes and adding it again.
        Error 138 (net::ERR_NETWORK_ACCESS_DENIED): Unable to access the
        network.

    I didn't make any changes to my firewall settings between the time it was working and when it wasn't. I'm using the default Windows Firewall. I added Chrome to the allowed programs and restarted, but that didn't fix the error. I even reinstalled Chrome completely and that didn't work either. Any help would be appreciated. EDIT: I forgot to mention that Firefox and IE9 work fine.

    Read the article

  • Cannot mount USB disks due to "not authorized" error

    - by shadovv
    I have Ubuntu 12.04 (.2?) installed. I don't plan to need the GUI after setup, so I modified GRUB to boot to the console; whenever I want to start the GUI, I type startx. Simple, basic stuff. However, I seem to be hitting a snag: I am having trouble mounting/accessing USB flash drives in the GUI I started from the console: "Unable to mount location - not authorized". It seems like it should be an easy fix, but I can't figure out what I'm overlooking. Can someone help me out?

    Read the article

  • Looking for a code plugin!!

    - by GrumpyOldDBA
    SET ANSI_NULLS ON
    SET QUOTED_IDENTIFIER ON
    GO

    IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo'
                     AND TABLE_NAME = 'MSPaymentForExtraction')
    BEGIN
        CREATE TABLE [dbo].[MSPaymentForExtraction]
        (
            [MSPaymentID] [int] NOT NULL IDENTITY(1, 1) NOT FOR REPLICATION
        ) ON [PRIMARY]
    END
    GO
    ...(read more)

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event's time and duration to satisfy business process requirements, but it is one that I think is really useful for understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop. Here is the simplest and most often used definition for a Hopping window (you can find them all here):

        public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            TimeSpan hopSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy)

    And here is the definition for a Tumbling window:

        public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy)

    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides, though, is that a window cannot be flushed until an event belonging to a following window occurs. This means that you will potentially never see some events, or see them only with a delay. Let me explain. Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime, and it is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over: it cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you. Imagine you have events with the following timestamps:

        12:10:10 PM
        12:10:20 PM
        12:10:35 PM
        12:10:45 PM
        11:59:59 PM

    And imagine that you have defined a 1-minute Tumbling window over this stream using the following syntax:

        var HoppingStream = from shift in inputStream.TumblingWindow(
                                TimeSpan.FromMinutes(1),
                                HoppingWindowOutputPolicy.ClipToWindowEnd)
                            select new WindowCountPayload
                            {
                                CountInWindow = (Int32)shift.Count()
                            };

    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed. Further examples of using windowing in StreamInsight can be found here.
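
    To make the flush behaviour concrete, here is a small simulation of a tumbling window in Python (an illustration of the concept only, not StreamInsight code). Events are bucketed by StartTime, and a bucket is emitted only once an event belonging to a later bucket arrives, so the 12:10 window surfaces only when the 11:59:59 PM event shows up, and the final bucket is never emitted at all:

        from datetime import datetime, timedelta

        EPOCH = datetime(2012, 1, 1)

        def tumbling_windows(events, width=timedelta(minutes=1)):
            # Yield (window_start, events) only once an event from a LATER
            # window arrives; the current window can never be emitted,
            # because we cannot know that it is complete.
            current_start, bucket = None, []
            for ts in sorted(events):
                start = EPOCH + ((ts - EPOCH) // width) * width  # align to window
                if start != current_start:
                    if bucket:
                        yield current_start, bucket   # flushed only now
                    current_start, bucket = start, []
                bucket.append(ts)
            # The final bucket is intentionally never yielded.

        events = [datetime(2012, 1, 1, 12, 10, 10),
                  datetime(2012, 1, 1, 12, 10, 20),
                  datetime(2012, 1, 1, 12, 10, 35),
                  datetime(2012, 1, 1, 12, 10, 45),
                  datetime(2012, 1, 1, 23, 59, 59)]

        for start, batch in tumbling_windows(events):
            print(start.time(), len(batch))  # prints only when 23:59:59 arrives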

    Read the article

  • What should NOT be included in comments? (opinion on a dictum by the inventor of Forth)

    - by AKE
    The often provocative Chuck Moore (inventor of the Forth language) gave the following advice (paraphrasing): "Use comments sparingly. Programs are self-documenting, with a modicum of help from mnemonics. Comments should say WHAT the program is doing, not HOW."

    My question: should comments say WHY the program is doing what it is doing?

    Update: in addition to the answers below, these two provide additional insight:

    1. Beginner's guide to writing comments?
    2. http://programmers.stackexchange.com/a/98609/62203

    Read the article

  • What happened to this type of naming convention?

    - by Smith
    I have read so many docs about naming conventions, most recommending both Pascal case and camel case. Well, I agree with this; it's OK. This might not be pleasing to some, but I am just trying to get your opinion on why you name your objects and classes in a certain way. What happened to this type of naming convention, and/or why is it bad? I want to name a structure, and I prefix it with "struct". My reason is that, with IntelliSense, I see all structures in one place, and anywhere I see the struct prefix, I know it's a struct:

        structPerson
        structPosition

    Another example is the enum, although I may not prefix it with "enum", but maybe with "enm":

        enmFruits
        enmSex

    Again, my reason is that in IntelliSense I see all my enumerations in one place. Because .NET has so many built-in data structures, I think this helps me do less searching. Note that I used .NET in this example, but I welcome language-agnostic answers.

    Read the article

  • Attempting to GREP details of a Java error

    - by BOMEz
    I'm running Ubuntu 11 and I'm having some issues with grep. I have a shell script (see below) which essentially checks whether a certain Java program of mine is running, and if not, runs it. That part works out great! If my Java application throws any kind of exception, however, I would like to capture that information and email it to myself. How can I go about checking whether the call to java -jar /bin/MyApp.jar fails? I tried piping it to grep, but that doesn't seem to work. Below is the full script that I've written:

        #Check if MyApp.jar is running, if not run it.
        if [ $(ps aux | grep 'java' | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
        then
            echo "PacketCapture Starting...\n"
            java -jar /bin/MyApp.jar
            echo "PacketCapture Started.\n"
        else
            echo "PacketCapture already running.\n"
        fi
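
    Uncaught exceptions land on the JVM's stderr, not stdout, which is likely why piping to grep did not catch them. One way to capture them, sketched here in Python as a wrapper (an assumption on my part, not part of the original script; the addresses are placeholders and a local MTA is assumed):

        #!/usr/bin/env python3
        # Run MyApp.jar; if it exits non-zero, email whatever it wrote to stderr.
        import smtplib
        import subprocess
        from email.message import EmailMessage

        result = subprocess.run(
            ["java", "-jar", "/bin/MyApp.jar"],
            capture_output=True, text=True,
        )

        if result.returncode != 0:
            # Java stack traces are written to stderr, so that is what we mail.
            msg = EmailMessage()
            msg["Subject"] = f"MyApp.jar failed (exit {result.returncode})"
            msg["From"] = "packetcapture@localhost"   # placeholder addresses
            msg["To"] = "admin@localhost"
            msg.set_content(result.stderr or "No stderr output captured.")
            with smtplib.SMTP("localhost") as smtp:   # assumes a local MTA
                smtp.send_message(msg)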

    Read the article

  • Is this the most effective, simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I am curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along the path it moves.

        #include "SDL.h"
        #include "SDL_image.h"

        int main(int argc, char* argv[])
        {
            bool exit = false;
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480,
                                               SDL_WINDOW_SHOWN);
            SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                                                   SDL_RENDERER_ACCELERATED |
                                                   SDL_RENDERER_PRESENTVSYNC);

            SDL_Surface *png = IMG_Load("character.png");
            SDL_Rect src;
            src.x = 0;   src.y = 0;   src.w = 161;  src.h = 159;
            SDL_Rect dest;
            dest.x = 50; dest.y = 50; dest.w = 161; dest.h = 159;

            SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
            SDL_FreeSurface(png);

            while (exit == false) {
                dest.x++;
                SDL_RenderClear(ren);
                SDL_RenderCopy(ren, tex, &src, &dest);
                SDL_RenderPresent(ren);
            }

            SDL_Delay(5000);
            SDL_DestroyTexture(tex);
            SDL_DestroyRenderer(ren);
            SDL_DestroyWindow(win);
            SDL_Quit();
        }

    Read the article

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this:

    1. continue any in-progress animations
    2. check for user input
    3. apply AI
    4. move things
    5. resolve events such as collisions
    6. draw it all to screen

    I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is: if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well-established idea, so there must be an explanation for this problem, as I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work / how would you implement it so that they are in sync but running at different speeds?
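
    A common answer is a fixed-timestep loop with interpolation (the pattern popularized by Glenn Fiedler's "Fix Your Timestep" article): logic updates a state snapshot at a fixed rate, and the renderer only ever reads completed snapshots, blending between the previous and current one, so it can run as fast as it likes without ever seeing a half-updated world. A minimal single-threaded sketch in Python with illustrative names:

        import time

        DT = 1.0 / 60.0     # fixed logic step: 60 updates per second

        def update(state, dt):
            # Logic writes only to a new snapshot; nothing is mutated in place.
            return {"x": state["x"] + 100.0 * dt}   # move 100 units/sec

        def render(prev, curr, alpha):
            # The renderer reads two *completed* snapshots and interpolates,
            # so it never draws a half-updated state.
            x = prev["x"] * (1.0 - alpha) + curr["x"] * alpha
            print(f"draw at x = {x:.2f}")

        prev = curr = {"x": 0.0}
        accumulator, last = 0.0, time.perf_counter()

        for _ in range(300):                # stand-in for "while running"
            now = time.perf_counter()
            accumulator += now - last
            last = now
            while accumulator >= DT:        # run logic in whole fixed steps
                prev, curr = curr, update(curr, DT)
                accumulator -= DT
            render(prev, curr, accumulator / DT)   # draw as often as you like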

    Read the article

  • What is best practice for search engines when a website is under maintenance?

    - by jamescridland
    I need around a week to transition a heavily data-driven website from one back end to another. During that time I plan to keep some pages live, but they won't all work well or look brilliant, and some pages won't work at all. What is the best way to ensure I don't scare Google? Should I block everything in robots.txt, or serve everything that doesn't work with a 503 status, or are there other things that I should be considering?
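
    If you go the 503 route, the commonly recommended detail is to send a Retry-After header along with the 503, so crawlers treat the outage as temporary and back off rather than de-index the pages. A minimal holding-page responder, sketched with Python's standard library as a stand-in for whatever front end actually serves the site:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class MaintenanceHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # 503 + Retry-After says "temporarily down, come back later",
                # which avoids the de-indexing a 404 or an empty 200 can cause.
                self.send_response(503)
                self.send_header("Retry-After", str(24 * 3600))  # one day
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.end_headers()
                self.wfile.write(b"<h1>Down for maintenance</h1><p>Back soon.</p>")

        if __name__ == "__main__":
            HTTPServer(("", 8080), MaintenanceHandler).serve_forever()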

    Read the article

  • System checks for disk drive error every time it boots

    - by Starx
    When the disk space for my Ubuntu installation partition was getting low, I used GParted from a live CD to increase its capacity by deleting another partition and merging it into the Ubuntu partition. Since then, I get a disk check for errors on my partitions at every boot. What seems to be causing this, and how do I fix it? Update: here is my boot.log, if it provides any insight:

        fsck from util-linux 2.19.1
        fsck from util-linux 2.19.1
        /dev/sda1 was not cleanly unmounted, check forced.
        ubuntu: clean, 501325/1310720 files, 2958455/5242880 blocks
        /dev/sda1: 241/51272 files (3.3% non-contiguous), 73541/102400 blocks
        mountall: fsck /boot [358] terminated with status 1
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox
        ...

    /dev/sda1 is a separate GRUB partition for my dual OSes.

    Read the article

  • Recovering SQL Server 2008 Database From Error 2008

    MS SQL Server 2008 is the latest version of SQL Server. It has been designed with the SQL Server Always On technologies that minimize the downtime and maintain appropriate levels of application availa... [Author: Mark Willium - Computers and Internet - May 13, 2010]

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered: create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy, eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to sign in again.

    Authorization
    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine: I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly, everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick, less agile, and catching those bugs that happen once in 1,000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
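
    One way to keep long asynchronous chains debuggable, sketched here in Python's asyncio purely as an illustration (the question is about the shape of the code rather than any one framework), is to write each business procedure as one sequential-looking coroutine, so breakpoints and stack traces follow the business flow instead of a web of continuations. All names below are hypothetical:

        import asyncio

        # Hypothetical repository calls standing in for the web-service invoker.
        async def fetch_order(order_id):
            await asyncio.sleep(0.1)                 # simulated network latency
            return {"id": order_id, "paid": order_id % 2 == 0}

        async def charge(order):
            await asyncio.sleep(0.1)
            return {**order, "paid": True}

        async def ship(order):
            await asyncio.sleep(0.1)
            return f"order {order['id']} shipped"

        async def process(order_id):
            # The chain reads top to bottom; branching on an intermediate
            # result is a plain 'if', not another nested continuation.
            order = await fetch_order(order_id)
            if not order["paid"]:
                order = await charge(order)
            return await ship(order)

        async def main():
            # Independent chains still run concurrently.
            print(await asyncio.gather(*(process(i) for i in range(4))))

        asyncio.run(main())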

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • Windows 8 blue screen error watchdog violation

    - by Pramodh
    My system is Windows 8 Pro RTM x64. Recently my system started hanging frequently; nothing but the restart button works. But now, just after hanging, it showed me a BSOD error saying "watchdog violation". I used a blue screen viewer to inspect the error in the memory dump, and it showed me something about hal.dll and ntoskrnl.exe. This is what Windows showed me:

        Problem signature
        Problem Event Name: BlueScreen
        OS Version: 6.2.9200.2.0.0.256.48
        Locale ID: 1033

        Extra information about the problem
        BCCode: 133
        BCP1: 0000000000000000
        BCP2: 0000000000000281
        BCP3: 0000000000000280
        BCP4: 0000000000000000
        OS Version: 6_2_9200
        Service Pack: 0_0
        Product: 256_1
        Bucket ID: 0x133_DPC_NETIO!KfdClassify
        Server information: c03a9f52-f2b5-483f-9b4a-cbb5be3a72c0

    Can anyone please walk me through the steps to get rid of this error? Thank you.

    Read the article

  • Making diff output more readable

    - by mgunes
    I'm looking for a tool that will take diff / debdiff output (and more specifically, the output of this script) and display the result of the comparison in a highly readable, graphical way. Any pointers would be appreciated. Ideally, it would be the GTK+, FOSS equivalent of MDR. Meld, Diffuse and similar software are not fit for this purpose, since they're intended to work standalone, and don't take input from stdin.

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules have come with two different files: a normal source file and a .min file. Seeing that the performance and speed of a page can be increased by reducing the size of its files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages whose JS is both in include files and in the page, depending on whether a function is really only used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also some functions in a page that don't really make sense to put into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and use only that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction to go with on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
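
    For what it's worth, the bundle-and-minify step itself is small to script. A sketch in Python; the file list is hypothetical, and the whitespace-stripping "minifier" is a deliberately naive stand-in for a real tool such as UglifyJS or Closure Compiler:

        import gzip
        import re
        from pathlib import Path

        # Hypothetical source files; order matters if scripts depend on each other.
        SOURCES = ["common.js", "ajax.js", "page-widgets.js"]

        def naive_minify(js):
            # Toy minifier: drop /* */ and whole-line // comments, collapse
            # whitespace. Real minifiers also rename locals, handle strings, etc.
            js = re.sub(r"/\*.*?\*/", "", js, flags=re.S)
            js = re.sub(r"^\s*//.*$", "", js, flags=re.M)
            return re.sub(r"\s+", " ", js).strip()

        bundle = "\n;\n".join(Path(f).read_text() for f in SOURCES)
        minified = naive_minify(bundle)

        Path("site.min.js").write_text(minified)
        # Pre-compress so the server can send site.min.js.gz directly.
        Path("site.min.js.gz").write_bytes(gzip.compress(minified.encode()))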

    Read the article

  • Write SQL Code for MySQL Using HeidiSQL 4

    HeidiSQL is a free GUI client for MySQL, favored by many Web developers and database administrators of small to medium-sized businesses to manage persistent storage of data. This article will familiarize you with HeidiSQL's Query editor by using it to write a query that will join four tables together to perform searches against a help library.

    Read the article

  • Exclude pings from Apache error logs (run from PHP exec)

    - by fooraide
    Now, for a number of reasons, I need to ping several hosts on a regular basis for a dashboard display. I use this PHP function to do it:

        function PingHost($strIpAddr) {
            exec(escapeshellcmd('ping -q -W 1 -c 1 ' . $strIpAddr), $dataresult, $returnvar);
            if (substr($dataresult[4], 0, 3) == "rtt") {
                // We got a ping result, let's parse it.
                $arr = explode("/", $dataresult[4]);
                return ereg_replace(" ms", "", $arr[4]);
            } elseif (substr($dataresult[3], 35, 16) == "100% packet loss") {
                // Host is down!
                return "Down";
            } elseif ($returnvar == "2") {
                return "No DNS";
            }
        }

    The problem is that whenever there is an unknown host, an error gets logged to my Apache error log (/var/log/apache/error.log). How would I go about disabling those log entries for this particular function? Disabling logs in the vhost is not an option, since logs for that vhost are relevant, just not the pings. Thanks,
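
    The log noise comes from ping writing its "unknown host" complaint to stderr, which ends up in the error log when the command runs under the web server; discarding the command's stderr keeps it out. Here is the same probe sketched in Python (my assumption of an equivalent approach, not the asker's code):

        import subprocess

        def ping_host(ip):
            """Ping once; return avg round-trip time in ms, 'Down', or 'No DNS'."""
            result = subprocess.run(
                ["ping", "-q", "-W", "1", "-c", "1", ip],
                stdout=subprocess.PIPE,
                stderr=subprocess.DEVNULL,  # discard the noise instead of logging it
                text=True,
            )
            if result.returncode == 2:      # iputils ping: error, e.g. unknown host
                return "No DNS"
            if result.returncode != 0:
                return "Down"
            for line in result.stdout.splitlines():
                if line.startswith("rtt"):  # rtt min/avg/max/mdev = .../.../.../...
                    return line.split("/")[4]   # same field the PHP code extracts
            return "Down"

    In the PHP function above, the analogous one-line fix would be redirecting stderr inside the shell command itself (e.g. appending 2>/dev/null), since the redirection is shell behaviour rather than a PHP or Apache setting.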

    Read the article

  • Tracking 502 bad gateway error

    - by dasickle
    I moved my WordPress site to WP Engine and now I constantly get 502 errors. I spoke with support and they said that it's because I have a lot of DB queries. I ran some tests: my front page has only 95 queries and a page size of about 500 KB, and most inner pages are around 60 queries. All queries are very short. Some people tell me it's common with WP Engine because they run nginx. Why do I keep getting these errors, and is there a way to track how many of them happen on a daily basis? P.S. The WP Engine log is empty, so I can't see the 502s there.
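
    If you can get hold of an access log from the host (or put a proxy/CDN that produces one in front of the site), counting 502s per day is a small scripting job. A sketch in Python, assuming the common combined log format where the status code is the ninth whitespace-separated field; that assumption will not hold for every host's format:

        import sys
        from collections import Counter

        # Count 502 responses per day in a combined-format access log, e.g.:
        # 1.2.3.4 - - [27/Oct/2012:13:09:57 +0000] "GET / HTTP/1.1" 502 166 ...
        per_day = Counter()
        with open(sys.argv[1]) as log:
            for line in log:
                fields = line.split()
                if len(fields) > 8 and fields[8] == "502":
                    day = fields[3].lstrip("[").split(":")[0]   # 27/Oct/2012
                    per_day[day] += 1

        for day, count in sorted(per_day.items()):
            print(day, count)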

    Read the article
