Search Results

Search found 3865 results on 155 pages for 'removing whitespace'.


  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...

    Let Me List the Ways

    So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise it's somewhat out of date, but the one good point it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc has now been superseded by weblogic.appc, so we'll use that instead. Thanks to Steve for the pointer there.

    And So To APPC

    APPC has documentation - always a great place to start - and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite.

    The Devil is in the Detail

    OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.

    Information You'll Need

    The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.

    1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.

    2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this:

        <library-ref>
          <library-name>adf.oracle.domain</library-name>
        </library-ref>

    Make a note of the library ref (adf.oracle.domain in this case), you'll need that in a second.

    3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again make a note of the library references.

    4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.

    5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.

    6. Finally you'll need the location of the adfsharebean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.

    Now we're ready to go, and it's a simple matter of plugging the information we have gathered into the relevant command line arguments for the utility.

    A Simple CMD File to Run APPC

    Here's the stub .cmd file I'm using on Windows to run this.

        @echo off
        REM Stub weblogic.appc Runner
        setlocal
        set WLS_HOME=C:\builds\WLS_PS4
        set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
        set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
        set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
        set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
        set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
        set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
        set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
        REM Set up the WebLogic Environment so appc can be found
        call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
        CLS
        REM Now compile away!
        java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
        endlocal

    Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!)

    And So...

    In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go?

    Read the article

  • Why does Google mark one e-mail as spam but not the other?

    - by nKn
    I've a Postfix installation which works fine, I don't get any trouble with mails sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to some GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from the Thunderbird or RoundCube clients, the mail is not marked as spam. However, if I use PHPMailer, it always gets catalogued as spam. So I compared both headers and I just can't find the reason why the second is marked as spam while the first one is just ok. The first header sent from a mail client which is not marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4= Message-ID: <[email protected]> Date: Tue, 19 Aug 2014 19:08:24 +0100 From: My Name <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0 MIME-Version: 1.0 To: My other account <[email protected]> Subject: . Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit . 
The second header sent from PHPMailer which is always marked as spam: Delivered-To: [email protected] Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Return-Path: <[email protected]> Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT) Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected] Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3 7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E= X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net X-Spam-Level: X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0 Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ= Date: Tue, 19 Aug 2014 20:12:07 +0200 To: [email protected] From: Trying another account <[email protected]> Reply-to: Trying another account <[email protected]> Subject: . Message-ID: <[email protected]> X-Priority: 3 X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net) MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="UTF-8" . I also tried: Adding a User-Agent header to match the first one. Removing the X-Mailer header. No one of them made a difference. Is there some significant difference which is making the second e-mail to be marked as spam by Google?

    Read the article

  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
    If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There’s more you need to do to ensure you’re secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you’re safe. If it catches the malware later, you have a bigger problem. Change Your Passwords You’ve probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you’ll want to change the passwords for your important accounts — email, online banking, and whatever other important accounts you’ve logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future. Ensure the Malware Is Actually Removed Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also “open the floodgates” after they’re running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you’ll want to take extra precautions. You shouldn’t assume it’s clean just because your antivirus removed what it found. It’s probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don’t have good backups and can’t get back up and running quickly, but this is the only way you can have 100% confidence that your Windows system isn’t infected. It’s all a matter of how paranoid you want to be. Figure Out How the Malware Arrived If your computer became infected, the malware must have arrived somehow. You’ll want to examine your computer’s security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different types of potentially dangerous file extensions that can contain malware to keep track of. We’ve tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask: Are you using an antivirus? – If you don’t have an antivirus installed, you should. 
If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft’s antivirus product has been doing very poorly in tests. Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you’re not sure whether you need Java, you probably don’t. Are any browser plugins out-of-date? – Visit Mozilla’s Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don’t need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used. Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous. Are you being careful about what you run? – Watch out when downloading software to ensure you don’t accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don’t run programs from email attachments. Be careful about what you run and where you get it from in general. If you can’t figure out how the malware arrived because everything looks okay, there’s not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online-shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.     

    Read the article

  • Graphics.MeasureCharacterRanges giving wrong size calculations in C#.Net?

    - by Owen Blacker
    I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user entered, so I want to vary the font size to make sure it fits within the bounding box. I have code that was doing this fine on my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results. I'm running the size calculation as follows: StringFormat fmt = new StringFormat(); fmt.Alignment = StringAlignment.Center; fmt.LineAlignment = StringAlignment.Near; fmt.FormatFlags = StringFormatFlags.NoClip; fmt.Trimming = StringTrimming.None; int size = __startingSize; Font font = __fonts.GetFontBySize(size); while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox)) { context.Trace.Write("MyHandler.ProcessRequest", "Decrementing font size to " + size + ", as size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); size--; if (size < __minimumSize) { break; } font = __fonts.GetFontBySize(size); } context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in " + font.FontFamily.Name + " at " + font.SizeInPoints + "pt, size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); I then use the following line to render the text onto an image I'm pulling from the filesystem: g.DrawString(text, font, __brush, __textBoundingBox, fmt); where: __fonts is a PrivateFontCollection, PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64); int __minimumSize = 8; int __startingSize = 48; Brush __brush = Brushes.White; int size starts out at 48 and decrements within that loop Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler) The other methods are: private static RectangleF GetStringBounds(string text, Font font, StringFormat fmt) { CharacterRange[] range = { new CharacterRange(0, text.Length) }; StringFormat myFormat = fmt.Clone() as StringFormat; myFormat.SetMeasurableCharacterRanges(range); using (Graphics g = Graphics.FromImage(new Bitmap( (int) __textBoundingBox.Width - 1, (int) __textBoundingBox.Height - 1))) { g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias; g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat); return regions[0].GetBounds(g); } } public static string Size(this RectangleF rect) { return rect.Width + "×" + rect.Height; } public static bool IsLargerThan(this RectangleF a, RectangleF b) { return (a.Width > b.Width) || (a.Height > b.Height); } Now I have two problems. Firstly, the text sometimes insists on wrapping by inserting a line-break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why it is that Graphics.MeasureCharacterRanges thinks that this fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it in Latin alphabet words, as well as other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges only to be word-wrapping at whitespace characters (or hyphens)? This first problem is the same as post 2499067. 
Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what it's actually taking. Using trial and error to set the __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and it fits within that box). Indeed, without forcing the matter, the while loop iterates to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits. The trace log excerpt is as follows: Decrementing font size to 24, as size is 193×122 and limit is 212×64 Decrementing font size to 23, as size is 191×117 and limit is 212×64 Decrementing font size to 22, as size is 200×75 and limit is 212×64 Decrementing font size to 21, as size is 192×71 and limit is 212×64 Decrementing font size to 20, as size is 198×68 and limit is 212×64 Decrementing font size to 19, as size is 185×65 and limit is 212×64 Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64 So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure it in Paint.Net), but it's going far further than it should be doing because, frankly, it's returning the wrong damn results. Any and all help gratefully received. Thanks!
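    As a quick sanity check (this is not part of the original handler, and the font name below is only a stand-in for the PrivateFontCollection font), here is a minimal sketch that measures the same string with both Graphics.MeasureCharacterRanges and Graphics.MeasureString against the same layout rectangle. Comparing the two reported heights can help narrow down whether the odd numbers come from the measurement call itself or from the way its result is used in the sizing loop.

    using System;
    using System.Drawing;
    using System.Drawing.Text;

    class MeasureComparison
    {
        static void Main()
        {
            const string text = "VENNEGOOR of HESSELINK";
            var layout = new RectangleF(150, 110, 212, 64);  // same bounding box as above

            using (var bmp = new Bitmap(212, 64))
            using (var g = Graphics.FromImage(bmp))
            using (var font = new Font("Arial", 24))         // stand-in for the private font
            using (var fmt = new StringFormat())
            {
                g.TextRenderingHint = TextRenderingHint.AntiAlias;
                fmt.SetMeasurableCharacterRanges(new[] { new CharacterRange(0, text.Length) });

                // Height as reported by MeasureCharacterRanges
                Region[] regions = g.MeasureCharacterRanges(text, font, layout, fmt);
                RectangleF rangeBounds = regions[0].GetBounds(g);

                // Height as reported by MeasureString for the same layout size
                SizeF stringSize = g.MeasureString(text, font, new SizeF(layout.Width, layout.Height), fmt);

                Console.WriteLine("MeasureCharacterRanges: {0} x {1}", rangeBounds.Width, rangeBounds.Height);
                Console.WriteLine("MeasureString:          {0} x {1}", stringSize.Width, stringSize.Height);
            }
        }
    }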

    Read the article

  • CodePlex Daily Summary for Monday, July 15, 2013

    CodePlex Daily Summary for Monday, July 15, 2013Popular ReleasesMVC Forum: MVC Forum v1.0: Finally version 1.0 is here! We have been fixing a few bugs, and making sure the release is as stable as possible. We have also changed the way configuration of the application works, mostly how to add your own code or replace some of the standard code with your own. If you download and use our software, please give us some sort of feedback, good or bad!SharePoint 2013 TypeScript Definitions: Release 1.1: TypeScript 0.9 support SharePoint TypeScript Definitions are now compliant with the new version of TypeScript TypeScript generics are now used for defining collection classes (derivatives of SP.ClientCollection object) Improved coverage Added mQuery definitions (m$) Added SPClientAutoFill definitions SP.Utilities namespace is now fully covered SP.UI modal dialog definitions improved CSR definitions improved, added some missing methods and context properties, mostly related to list ...GoAgent GUI: GoAgent GUI ??? 1.0.0: ????GoAgent GUI????,???????????.Net Framework 4.0 ???????: Windows 8 x64 Windows 8 x86 Windows 7 x64 Windows 7 x86 ???????: Windows 8.1 Preview (x86/x64) ???????: Windows XP ????: ????????GoAgent????,????????,?????????????,???????????????????,??????????,????。PiGraph: PiGraph 2.0.8.13: C?p nh?t:Các l?i dã s?a: S?a l?i không nh?p du?c s? âm. L?i tabindex trong giao di?n Thêm hàm Các l?i chua kh?c ph?c: L?i ghi chú nh?p nháy màu. L?i khung ghi chú vu?t ra kh?i biên khi luu file. Luu ý:N?u không kh?i d?ng duoc chuong trình, b?n nên c?p nh?t driver card d? h?a phiên b?n m?i nh?t: AMD Graphics Drivers NVIDIA Driver Xem yêu c?u h? th?ngD3D9Client: D3D9Client R12 for Orbiter Beta: D3D9Client release for orbiter BetaVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Project Server 2013 Event Handler Admin Tool: PSI Event Admin Tool: Download & exract the File. Use LoggerAdmin to upload the event handlers in project server 2013. PSIEventLogger\LoggerAdmin\bin\DebugGherkin editor: Gherkin Editor Beta 2: Fix issue #7 and add some refactoring and code cleanupNew-NuGetPackage PowerShell Script: New-NuGetPackage.ps1 PowerShell Script v1.2: Show nuget gallery to push to when prompting user if they want to push their package.Site Templates By Steve: SharePoint 2010 CORE Site Theme By Steve WSP: Great Site Theme to start with from Steve. See project home page for install instructions. This is a nice centered, mega-menu, fixed width masterpage with styles. Remember to update the mega menu lists.SharePoint Solution Installer: SharePoint Solution Installer V1.2.8: setup2013.exe now supports CompatibilityLevel to target specific hive Use setup.exe for SP2007 & SP2010. Use setup2013.exe for SP2013.TBox - tool to make developer's life easier.: TBox 1.021: 1)Add console unit tests runner, to run unit tests in parallel from console. Also this new sub tool can save valid xml nunit report. It can help you in continuous integration. 
2)Fix build scripts.LifeInSharepoint Modern UI Update: Version 2: Some minor improvements, including Audience Targeting support for quick launch links. Also removing all NextDocs references.Virtual Photonics: VTS MATLAB Package 1.0.13 Beta: This is the beta release of the MATLAB package with examples of using the VTS libraries within MATLAB. Changes for this release: Added two new examples to vts_solver_demo.m that: 1) generates and plots R(lambda) at a given rho, and chromophore concentrations assuming a power law for scattering, and 2) solves inverse problem for R(lambda) at given rho. This example solves for concentrations of HbO2, Hb and H20 given simulated measurements created using Nurbs scaled Monte Carlo and inverted u...Advanced Resource Tab for Blend: Advanced Resource Tab: This is the first alpha release of the advanced resource tab for Blend for Visual Studio 2012.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first approach matching. ;-) - Added appointment status support (to show fr...Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsLINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneNew Projects[.Net Intl] harroc_c;mallar_a;olouso_f: The goal of this project is to create a web crawler and a web front who allows you to search in your index. You will create a mini (or large!) search engine basButterfly Storage: Butterfly Storage is a data access technology based on object-oriented database model for Windows Store applications.KaveCompany: KaveCompleave that girl alone: a team project!MyClrProfiler: This project helps you learn about and develop your own CLR profiler.NETDeob: Deobfuscate obfuscated .NET files easilyProgram stomatologie: SummarySimple Graph Library: Simple portable class library for graphs data structures. .NET, Silverlight 4/5, Windows Phone, Windows RT, Xbox 360T6502 Emulator: T6502 is a 6502 emulator written in TypeScript using AngularJS. 
The goal is well-organized, readable code over performance.WP8 File Access Webserver: C# HTTP server and web application on Windows Phone 8. Implements file access, browsing and downloading.wpadk: wpadk????wp7?????? ?????????,?????、SDK、wpadk?????????????。??????????????????。??????????????????,????wpadk?????????????????????????????????????。xlmUnit: xlmUnit, Unit Testing

    Read the article

  • Keypress detection won't work after a seemingly unrelated code change

    - by LukeZaz
    I'm trying to have the Enter key cause a new 'map' to generate for my game, but for whatever reason after implementing full-screen in it the input check won't work anymore. I tried removing the new code and only pressing one key at a time, but it still won't work. Here's the check code and the method it uses, along with the newMap method: public class Game1 : Microsoft.Xna.Framework.Game { // ... protected override void Update(GameTime gameTime) { // ... // Check if Enter was pressed - if so, generate a new map if (CheckInput(Keys.Enter, 1)) { blocks = newMap(map, blocks, console); } // ... } // Method: Checks if a key is/was pressed public bool CheckInput(Keys key, int checkType) { // Get current keyboard state KeyboardState newState = Keyboard.GetState(); bool retType = false; // Return type if (checkType == 0) { // Check Type: Is key currently down? if (newState.IsKeyDown(key)) { retType = true; } else { retType = false; } } else if (checkType == 1) { // Check Type: Was the key pressed? if (newState.IsKeyDown(key)) { if (!oldState.IsKeyDown(key)) { // Key was just pressed retType = true; } else { // Key was already pressed, return false retType = false; } } } // Save keyboard state oldState = newState; // Return result if (retType == true) { return true; } else { return false; } } // Method: Generate a new map public List<Block> newMap(Map map, List<Block> blockList, Console console) { // Create new map block coordinates List<Vector2> positions = new List<Vector2>(); positions = map.generateMap(console); // Clear list and reallocate memory previously used up by it blockList.Clear(); blockList.TrimExcess(); // Add new blocks to the list using positions created by generateMap() foreach (Vector2 pos in positions) { blockList.Add(new Block() { Position = pos, Texture = dirtTex }); } // Return modified list return blockList; } // ... 
} and the generateMap code: // Generate a list of Vector2 positions for blocks public List<Vector2> generateMap(Console console, int method = 0) { ScreenTileWidth = gDevice.Viewport.Width / 16; ScreenTileHeight = gDevice.Viewport.Height / 16; maxHeight = gDevice.Viewport.Height; List<Vector2> blockLocations = new List<Vector2>(); if (useScreenSize == true) { Width = ScreenTileWidth; Height = ScreenTileHeight; } else { maxHeight = Height; } int startHeight = -500; // For debugging purposes, the startHeight is set to an // hopefully-unreachable value - if it returns this, something is wrong // Methods of land generation /// <summary> /// Third version land generation /// Generates a base land height as the second version does /// but also generates a 'max change' value which determines how much /// the land can raise or lower by which it now does by a random amount /// during generation /// </summary> if (method == 0) { // Get the land height startHeight = rnd.Next(1, maxHeight); int maxChange = rnd.Next(1, 5); // Amount ground will raise/lower by int curHeight = startHeight; for (int w = 0; w < Width; w++) { // Run a chance to lower/raise ground level int changeBy = rnd.Next(1, maxChange); int doChange = rnd.Next(0, 3); if (doChange == 1 && !(curHeight <= (1 + maxChange))) { curHeight = curHeight - changeBy; } else if (doChange == 2 && !(curHeight >= (29 - maxChange))) { curHeight = curHeight + changeBy; } for (int h = curHeight; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; blockLocations.Add(new Vector2(x, y)); } } console.newMsg("[INFO] Cur, height change maximum: " + maxChange.ToString()); } /// <summary> /// Second version land generator /// Generates a solid mass of land starting at a random height /// derived from either screen height or provided height value /// </summary> else if (method == 1) { // Get the land height startHeight = rnd.Next(0, 30); for (int w = 0; w < Width; w++) { for (int h = startHeight; h < ScreenTileHeight; h++) { // Location variables float x = w * 16; float y = h * 16; // Add a tile at set location blockLocations.Add(new Vector2(x, y)); } } } /// <summary> /// First version land generator /// Generates land completely randomly either across screen or /// in a box set by Width and Height values /// </summary> else { // For each tile in the map... for (int w = 0; w < Width; w++) { for (int h = 0; h < Height; h++) { // Location variables float x = w * 16; float y = h * 16; // ...decide whether or not to place a tile... if (rnd.Next(0, 2) == 1) { // ...and if so, add a tile at that location. blockLocations.Add(new Vector2(x, y)); } } } } console.newMsg("[INFO] Cur, base height: " + startHeight.ToString()); return blockLocations; } I never touched any of the above code for this when it broke - changing keys won't seem to fix it. Despite this, I have camera movement set inside another Game1 method that uses WASD and works perfectly. All I did was add a few lines of code here: private int BackBufferWidth = 1280; // Added these variables private int BackBufferHeight = 800; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.PreferredBackBufferWidth = BackBufferWidth; // and this graphics.PreferredBackBufferHeight = BackBufferHeight; // this Content.RootDirectory = "Content"; this.graphics.IsFullScreen = true; // and this } When I try adding a console line to be printed in the event the key is pressed, it seems that the If is never even triggered despite the correct key being pressed.
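    One thing worth checking, offered only as a sketch and not as a confirmed diagnosis of this particular issue: because CheckInput saves oldState on every call, calling it more than once within the same Update (for example, for camera keys and then for Enter, if the camera code also goes through CheckInput) means the later comparison sees an oldState captured in the same frame, so the "was just pressed" branch can never fire. A common XNA pattern is to take exactly one keyboard snapshot per frame and compare it with the previous frame's snapshot; the class and member names below are illustrative, not from the question.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input;

    public class InputSnapshotGame : Game
    {
        private readonly GraphicsDeviceManager graphics;
        private KeyboardState currentKeyboard;
        private KeyboardState previousKeyboard;

        public InputSnapshotGame()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void Update(GameTime gameTime)
        {
            // Capture the keyboard state exactly once per frame...
            previousKeyboard = currentKeyboard;
            currentKeyboard = Keyboard.GetState();

            // ...so every edge test this frame compares the same two snapshots.
            if (WasPressed(Keys.Enter))
            {
                // regenerate the map here
            }

            base.Update(gameTime);
        }

        private bool WasPressed(Keys key)
        {
            return currentKeyboard.IsKeyDown(key) && previousKeyboard.IsKeyUp(key);
        }
    }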

    Read the article

  • I Clobbered a Leopard with a Window Last Night

    - by D'Arcy Lussier
    I’ve had my 15” Mac Book Pro for a little over a year now, and its hands-down the best laptop I’ve ever owned…hardware wise. And I tried, I really really tried, to like OSX. I even bought Parallels so I could run Windows 7 and all my development tools while still trying to live in an OSX world. But in the end, I missed Windows too much. There were just too many shortcomings with OSX that kept me from being productive. For one thing, Office for Mac is *not* Office for Windows. The applications are written by different teams, and Excel on the Mac is just different enough to be painful. The VM experience was adequate, but my MBP would heat up like crazy when running it and the experience trying to get Windows apps to interact with an OSX file system was awkward. And I found I was in the VM more than I thought I’d be. iMovie is not as easy to use for doing simple movie editing as Windows Movie Maker. There’s no free blog editing software for OSX that’s on par with Windows Live Writer. And really, all I was using OSX for was Twitter (which I can use a Windows client for) and web browsing (also something Windows can provide obviously). So I had to ask myself – why am I forcing myself to use an operating system I don’t like, on a laptop that can support Windows 7? And so I paved my MBP and am happily running Windows 7 on it…and its fantastic! All the good stuff with the hardware is still there with the goodness of Win 7. Happy happy. I did run into some snags doing this though, and that’s really what this blog post is about – things to be aware of if you want to install Win 7 directly on your MBP metal. First, Ensure You Have Your Original Mac Install Disk This was a warning my buddy Dylan, who’s been running Win 7 on his MBP for a while now, gave me early on. The reason you need that original disk is that the hardware drivers you need are all located there. Apparently you can’t easily download them, so make sure you have them ahead of time. Second, Forget BootCamp The only reason you need BootCamp is if you still want the option to boot into OSX. If you don’t, then you don’t need BootCamp. In fact, you don’t even need BootCamp to install Win 7. What you *will* need though is a DVD with Win 7 burnt on it. Apple doesn’t support bootable USB drives. Well, actually they do for Mac Book Airs which don’t come with optical drives…but to get it working you’ll need to edit a system file of BootCamp so your make of MBP is included in an XML document, and even then you *still* are using BootCamp meaning you’ll be making an OSX partition. So don’t worry about BootCamp, just burn a Windows 7 disc, put it into the DVD drive, and restart your MBP. Third, Know The Secret Commands So after putting in the Windows 7 DVD and restarting your MBP, you’ll want to hold down the ‘C’ key during boot up. This tells the MBP that it should boot from the DVD drive instead of the hard drive. Interestingly, it appears you don’t have to do this if its the Mac OSX install disc (more on that in a second), but regardless – hold down C and Windows will start the install process. Next up is the partition process. You’ll notice that there’s a partition called ETI or something like that. This has to do with the drive format that Apple uses and how they partition their system drives. What I did – I blew it away! At first I didn’t, but I was told I couldn’t install Windows on the remaining space due to the different drive format. Blowing away the ETI partition (and all other partitions) allowed me to continue the Windows install. 
*REMEMBER –  No warranty is provided or implied, just telling you what I did and how I got it to work. Ok, so now Windows is installed and I’m rebooting. Everything looks good, but I need drivers! So I put in the OSX install DVD and run the BootCamp assistant which installs all the Windows drivers I need. Fantastic! Oh, I need to restart – no problem. OH NO, PROBLEM! I left the OSX install DVD in the drive and now the MBP wants to boot from the drive and install OSX! I’m not holding down the C key, what the heck?! Ok, well there must be a way to eject this disk…hmm…no physical button on the side…the eject button doesn’t seem to work on the keyboard…no little pin hole to insert something to force the disc out…well what the…?! It turns out, if you want to eject a disc at boot up, you need (and I kid you not) to plug a mouse into the laptop and hold down the right-click button while its booting. This ejected the disc for me. Seriously. Finally, Things You Should Be Aware Of Once you have Windows up and running there’s a few things you need to be aware of, mainly new keyboard shortcuts. For instance, on the Mac keyboard there is no Home, End, PageUp or PageDown. There’s also no obvious way to do something like select large amounts of text (like you would by holding Shift-Home at the end of a line of text for instance). So here’s some shortcuts you need to know: Home – fn + left arrow End – fn + right arrow Select a line of text as you would with the Home key – Shift + fn + left arrow Select a line of text as you would with the End key – Shift + fn + right arrow Page Up – fn + up arrow Page Down – fn + down arrow Also, you’ll notice that the awesome Mac track pad doesn’t respond to taps as clicks. No fear, this is just a setting that needs to be altered in the BootCamp control panel (that controls the Mac Hardware-specific settings within Windows, you can access it easily from the system tray icon) One other thing, battery life seems a bit lower than with OSX, but then again I’m also doing more than Twitter or web browsing on this thing now. Conclusion My laptop runs awesome now that I have Windows 7 on there. It’s obviously up to individual taste, but for me I just didn’t see benefits to living in an OSX world when everything I needed lived in Windows. And also, I finally am back to an operating system that doesn’t require me to eject a USB drive before physically removing it! It’s 2012 folks, how has this not been fixed?! D

    Read the article

  • OData – The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement, all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert: CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list. SetValue – this is where the “Word” property of the NastyWord instance is set. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list. ResolveResource – the newly created resource will be returned directly here since we aren’t dealing with “handles” to objects but the actual objects themselves. Not too bad, eh? I didn’t find this documented anywhere but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. 
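    For context, reading the feed back works with the same generated client. Here's a minimal read-side sketch; it assumes the ServiceReference1 proxy and the localhost URI used in the insert example that follows.

    using System;

    class ReadNastyWords
    {
        static void Main(string[] args)
        {
            var svc = new ServiceReference1.NastyWordsDataSource(
                new Uri("http://localhost.:60921/NastyWords.svc"));

            // Enumerating the NastyWords property issues a GET against
            // NastyWords.svc/NastyWords and prints each entry.
            foreach (var nastyWord in svc.NastyWords)
            {
                Console.WriteLine(nastyWord.Word);
            }
        }
    }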
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here’s all of the code so far for to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!

    Read the article

  • SharePoint logging to a list

    - by Norgean
    I recently worked in an environment with several servers. Locating the correct SharePoint log file for error messages, or development trace calls, is cumbersome. And once the solution hit the cloud, it got even worse, as we had no access to the log files at all. Obviously we are not the only ones with this problem, and the current trend seems to be to log to a list. This had become an off-hour project, so rather than do the sensible thing and find a ready-made solution, I decided to do it the hard way. So! Fire up Visual Studio, create yet another empty SharePoint solution, and start to think of some requirements. Easy on/offI want to be able to turn list-logging on and off.Easy loggingFor me, this means being able to use string.Format.Easy filteringLet's have the possibility to add some filtering columns; category and severity, where severity can be "verbose", "warning" or "error". Easy on/off Well, that's easy. Create a new web feature. Add an event receiver, and create the list on activation of the feature. Tear the list down on de-activation. I chose not to create a new content type; I did not feel that it would give me anything extra. I based the list on the generic list - I think a better choice would have been the announcement type. Approximately: public void CreateLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListName);             if (list == null)             {                 var listGuid = web.Lists.Add(LogListName, "Logging for the masses", SPListTemplateType.GenericList);                 list = web.Lists[listGuid];                 list.Title = LogListTitle;                 list.Update();                 list.Fields.Add(Category, SPFieldType.Text, false);                 var stringColl = new StringCollection();                 stringColl.AddRange(new[]{Error, Information, Verbose});                 list.Fields.Add(Severity, SPFieldType.Choice, true, false, stringColl);                 ModifyDefaultView(list);             }         }Should be self explanatory, but: only create the list if it does not already exist (d'oh). Best practice: create it with a Url-friendly name, and, if necessary, give it a better title. ...because otherwise you'll have to look for a list with a name like "Simple_x0020_Log". I've added a couple of fields; a field for category, and a 'severity'. Both to make it easier to find relevant log messages. Notice that I don't have to call list.Update() after adding the fields - this would cause a nasty error (something along the lines of "List locked by another user"). The function for deleting the log is exactly as onerous as you'd expect:         public void DeleteLog(SPWeb web)         {             var list = web.Lists.TryGetList(LogListTitle);             if (list != null)             {                 list.Delete();             }         } So! "All" that remains is to log. Also known as adding items to a list. Lots of different methods with different signatures end up calling the same function. For example, LogVerbose(web, message) calls LogVerbose(web, null, message) which again calls another method which calls: private static void Log(SPWeb web, string category, string severity, string textformat, params object[] texts)         {             if (web != null)             {                 var list = web.Lists.TryGetList(LogListTitle);                 if (list != null)                 {                     var item = list.AddItem(); // NOTE! NOT list.Items.Add… just don't, mkay?                     
var text = string.Format(textformat, texts);                     if (text.Length > 255) // because the title field only holds so many chars. Sigh.                         text = text.Substring(0, 254);                     item[SPBuiltInFieldId.Title] = text;                     item[Degree] = severity;                     item[Category] = category;                     item.Update();                 }             } // omitted: Also log to SharePoint log.         } By adding a params parameter I can call it as if I was doing a Console.WriteLine: LogVerbose(web, "demo", "{0} {1}{2}", "hello", "world", '!'); Ok, that was a silly example, a better one might be: LogError(web, LogCategory, "Exception caught when updating {0}. exception: {1}", listItem.Title, ex); For performance reasons I use list.AddItem rather than list.Items.Add. For completeness' sake, let us include the "ModifyDefaultView" function that I deliberately skipped earlier.         private void ModifyDefaultView(SPList list)         {             // Add fields to default view             var defaultView = list.DefaultView;             var exists = defaultView.ViewFields.Cast<string>().Any(field => String.CompareOrdinal(field, Severity) == 0);               if (!exists)             {                 var field = list.Fields.GetFieldByInternalName(Severity);                 if (field != null)                     defaultView.ViewFields.Add(field);                 field = list.Fields.GetFieldByInternalName(Category);                 if (field != null)                     defaultView.ViewFields.Add(field);                 defaultView.Update();                   var sortDoc = new XmlDocument();                 sortDoc.LoadXml(string.Format("<Query>{0}</Query>", defaultView.Query));                 var orderBy = (XmlElement) sortDoc.SelectSingleNode("//OrderBy");                 if (orderBy != null && sortDoc.DocumentElement != null)                     sortDoc.DocumentElement.RemoveChild(orderBy);                 orderBy = sortDoc.CreateElement("OrderBy");                 sortDoc.DocumentElement.AppendChild(orderBy);                 field = list.Fields[SPBuiltInFieldId.Modified];                 var fieldRef = sortDoc.CreateElement("FieldRef");                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                   fieldRef = sortDoc.CreateElement("FieldRef");                 field = list.Fields[SPBuiltInFieldId.ID];                 fieldRef.SetAttribute("Name", field.InternalName);                 fieldRef.SetAttribute("Ascending", "FALSE");                 orderBy.AppendChild(fieldRef);                 defaultView.Query = sortDoc.DocumentElement.InnerXml;                 //defaultView.Query = "<OrderBy><FieldRef Name='Modified' Ascending='FALSE' /><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";                 defaultView.Update();             }         } First two lines are easy - see if the default view includes the "Severity" column. If it does - quit; our job here is done.Adding "severity" and "Category" to the view is not exactly rocket science. But then? Then we build the sort order query. Through XML. The lines are numerous, but boring. All to achieve the CAML query which is commented out. The major benefit of using the dom to build XML, is that you may get compile time errors for spelling mistakes. 
I say 'may', because although the compiler will not let you forget to close a tag, it will cheerfully let you spell "Name" as "Naem". Whichever you prefer, at the end of the day the view will sort by modified date and ID, both descending. I added the ID as there may be several items with the same time stamp. So! Simple logging to a list, with a sensible view, and with normal functionality for creating your own filters. I should probably have added some more views in code, ready filtered for "only errors", "errors and warnings" etc. And it would be nice to block verbose logging completely, but I'm not happy with the alternatives. (yetanotherfeature or an admin page seem like overkill - perhaps just removing it as one of the choices, and not log if it isn't there?) Before you comment - yes, try-catches have been removed for clarity. There is nothing worse than having a logging function that breaks your site!
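    To round this off, here's a minimal sketch of the web-scoped feature receiver described at the top of the post. The logger's class name (ListLogger) and the feature GUID are placeholders of mine; CreateLog and DeleteLog are the methods shown above.

    using System.Runtime.InteropServices;
    using Microsoft.SharePoint;

    [Guid("0E6C1C43-2F2B-4C8B-9D3E-3C6F3B6B2D11")] // placeholder GUID
    public class LogListFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            var web = properties.Feature.Parent as SPWeb;
            if (web != null)
            {
                new ListLogger().CreateLog(web);   // creates the log list if it is missing
            }
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
        {
            var web = properties.Feature.Parent as SPWeb;
            if (web != null)
            {
                new ListLogger().DeleteLog(web);   // tears the log list down again
            }
        }
    }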

    Read the article

  • Does my basic PHP Socket Server need optimization?

    - by Tom
    Like many people, I can do a lot of things with PHP. One problem I do face constantly is that other people can do it much cleaner, much more organized and much more structured. This also results in much faster execution times and much less bugs. I just finished writing a basic PHP Socket Server (the real core), and am asking you if you can tell me what I should do different before I start expanding the core. I'm not asking about improvements such as encrypted data, authentication or multi-threading. I'm more wondering about questions like "should I maybe do it in a more object oriented way (using PHP5)?", or "is the general structure of the way the script works good, or should some things be done different?". Basically, "is this how the core of a socket server should work?" In fact, I think that if I just show you the code here many of you will immediately see room for improvements. Please be so kind to tell me. Thanks! #!/usr/bin/php -q <? // config $timelimit = 180; // amount of seconds the server should run for, 0 = run indefintely $address = $_SERVER['SERVER_ADDR']; // the server's external IP $port = 9000; // the port to listen on $backlog = SOMAXCONN; // the maximum of backlog incoming connections that will be queued for processing // configure custom PHP settings error_reporting(1); // report all errors ini_set('display_errors', 1); // display all errors set_time_limit($timelimit); // timeout after x seconds ob_implicit_flush(); // results in a flush operation after every output call //create master IPv4 based TCP socket if (!($master = socket_create(AF_INET, SOCK_STREAM, SOL_TCP))) die("Could not create master socket, error: ".socket_strerror(socket_last_error())); // set socket options (local addresses can be reused) if (!socket_set_option($master, SOL_SOCKET, SO_REUSEADDR, 1)) die("Could not set socket options, error: ".socket_strerror(socket_last_error())); // bind to socket server if (!socket_bind($master, $address, $port)) die("Could not bind to socket server, error: ".socket_strerror(socket_last_error())); // start listening if (!socket_listen($master, $backlog)) die("Could not start listening to socket, error: ".socket_strerror(socket_last_error())); //display startup information echo "[".date('Y-m-d H:i:s')."] SERVER CREATED (MAXCONN: ".SOMAXCONN.").\n"; //max connections is a kernel variable and can be adjusted with sysctl echo "[".date('Y-m-d H:i:s')."] Listening on ".$address.":".$port.".\n"; $time = time(); //set startup timestamp // init read sockets array $read_sockets = array($master); // continuously handle incoming socket messages, or close if time limit has been reached while ((!$timelimit) or (time() - $time < $timelimit)) { $changed_sockets = $read_sockets; socket_select($changed_sockets, $write = null, $except = null, null); foreach($changed_sockets as $socket) { if ($socket == $master) { if (($client = socket_accept($master)) < 0) { echo "[".date('Y-m-d H:i:s')."] Socket_accept() failed, error: ".socket_strerror(socket_last_error())."\n"; continue; } else { array_push($read_sockets, $client); echo "[".date('Y-m-d H:i:s')."] Client #".count($read_sockets)." connected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } } else { $data = @socket_read($socket, 1024, PHP_NORMAL_READ); //read a maximum of 1024 bytes until a new line has been sent if ($data === false) { //the client disconnected $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." 
disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; } else { if ($data = trim($data)) { //remove whitespace and continue only if the message is not empty switch ($data) { case "exit": //close connection when exit command is given $index = array_search($socket, $read_sockets); unset($read_sockets[$index]); socket_close($socket); echo "[".date('Y-m-d H:i:s')."] Client #".($index-1)." disconnected (connections: ".count($read_sockets)."/".SOMAXCONN.")\n"; break; default: //for experimental purposes, write the given data back socket_write($socket, "\n you wrote: ".$data); } } } } } } socket_close($master); //close the socket echo "[".date('Y-m-d H:i:s')."] SERVER CLOSED.\n"; ?>
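    For what it's worth, the same single-threaded, select()-style accept/read/echo loop is sketched below with Python's selectors module - not a replacement for the PHP core above, just a compact reference point for the structure. The port and the "exit" command mirror the original; everything else is illustrative.

        import selectors
        import socket

        sel = selectors.DefaultSelector()
        master = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        master.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        master.bind(("0.0.0.0", 9000))
        master.listen()
        sel.register(master, selectors.EVENT_READ)

        while True:
            for key, _events in sel.select():
                sock = key.fileobj
                if sock is master:                       # new incoming connection
                    client, _addr = master.accept()
                    sel.register(client, selectors.EVENT_READ)
                else:                                    # data from an existing client
                    data = sock.recv(1024)
                    if not data or data.strip() == b"exit":
                        sel.unregister(sock)
                        sock.close()
                    else:
                        sock.sendall(b"you wrote: " + data)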

    Read the article

  • Likewise DomainJoin hangs on Finishing krb5.conf configuration

    - by dreay
    Hello, I have a problem when joining a CentOS release 5.4 (Final) x64 machine to the domain after running domainjoin-cli --loglevel info --log . join domain.local password I obtain the following, which seems to hang on "20100428112821:INFO:Finishing krb5.conf configuration" 20100428112817:INFO:Domainjoin invoked with the join command (remaining arguments will be printed later): 20100428112817:INFO: [/opt/likewise/bin/domainjoin-cli] 20100428112817:INFO: [--loglevel] 20100428112817:INFO: [info] 20100428112817:INFO: [--log] 20100428112817:INFO: [/tmp/join_1.log] 20100428112817:INFO: [join] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwsmd] 20100428112817:INFO:Daemon [/etc/init.d/lwsmd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwsmd] 20100428112817:INFO:Daemon [/etc/init.d/lwsmd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwregd] 20100428112817:INFO:Daemon [/etc/init.d/lwregd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwregd] 20100428112817:INFO:Daemon [/etc/init.d/lwregd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/netlogond] 20100428112817:INFO:Daemon [/etc/init.d/netlogond]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/netlogond] 20100428112817:INFO:Daemon [/etc/init.d/netlogond]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwiod] 20100428112817:INFO:Daemon [/etc/init.d/lwiod]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lwiod] 20100428112817:INFO:Daemon [/etc/init.d/lwiod]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/dcerpcd] 20100428112817:INFO:Daemon [/etc/init.d/dcerpcd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/dcerpcd] 20100428112817:INFO:Daemon [/etc/init.d/dcerpcd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/eventlogd] 20100428112817:INFO:Daemon [/etc/init.d/eventlogd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/eventlogd] 20100428112817:INFO:Daemon [/etc/init.d/eventlogd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lsassd] 20100428112817:INFO:Daemon [/etc/init.d/lsassd]: status [0] 20100428112817:INFO:Checking status of daemon [/etc/init.d/lsassd] 20100428112817:INFO:Daemon [/etc/init.d/lsassd]: status [0] 20100428112817:INFO:Domainjoin invoked with 2 arg(s) to the join command: 20100428112817:INFO: [domain.local] 20100428112817:INFO: [default.user] 20100428112817:INFO:Adding ops (fqdn ops.domain.local) to /etc/hosts ip 192.168.246.5, removing ops, ops.domain.local, ops, ops.domain.local 20100428112817:INFO:Reading krb5 file /tmp/likewisetmpPkpAn5/etc/krb5.conf 20100428112817:INFO:Reading krb5 file /tmp/likewisetmpb6dkNX/etc/krb5.conf 20100428112817:INFO:Reading nsswitch file /etc/nsswitch.conf 20100428112817:INFO:Reading pam configuration 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/config-util.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/config-util 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/runuser-l 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sshd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/other 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp.postfix 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/su-l 20100428112817:INFO:Reading pam file 
/tmp/likewisetmptrO2dQ/etc/pam.d/system-switch-mail-nox 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/kshell 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/authconfig 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ekshell 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/run_init 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/screen 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/eject 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-config-network-cmd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth-ac 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/kbdrate 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp.sendmail 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/chsh 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/setup 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-switch-mail 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ksu 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/login 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sudo-i 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/smtp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/runuser 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/chfn 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/ppp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/gssftp 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/remote 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/reboot 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/newrole 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-powersave 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-auth 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/halt 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/other.rpmnew 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/atd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/passwd 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/authconfig-tui 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-hibernate 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/su 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/system-config-network 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/neat 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-suspend-hybrid 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/crond 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/sudo 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/pm-suspend 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.d/poweroff 20100428112817:INFO:Reading pam file /tmp/likewisetmptrO2dQ/etc/pam.conf 20100428112817:INFO:File /tmp/likewisetmptrO2dQ/etc/pam.conf does not exist 20100428112817:INFO:Found config file /etc/ssh/sshd_config 20100428112817:INFO:Found 
binary /usr/sbin/sshd 20100428112817:INFO:Reading ssh file /etc/ssh/sshd_config 20100428112817:INFO:Found open sshd version 4.3.-1p2 20100428112817:INFO:Testing option ChallengeResponseAuthentication 20100428112817:INFO:Option ChallengeResponseAuthentication supported 20100428112817:INFO:Testing option UsePAM 20100428112817:INFO:Option UsePAM supported 20100428112817:INFO:Testing option PAMAuthenticationViaKBDInt 20100428112817:INFO:Option PAMAuthenticationViaKBDInt not supported 20100428112817:INFO:Testing option KbdInteractiveAuthentication 20100428112817:INFO:Option KbdInteractiveAuthentication supported 20100428112817:INFO:Testing option GSSAPIAuthentication 20100428112817:INFO:Option GSSAPIAuthentication supported 20100428112817:INFO:Testing option GSSAPICleanupCredentials 20100428112817:INFO:Option GSSAPICleanupCredentials supported 20100428112817:INFO:Found config file /etc/ssh/ssh_config 20100428112817:INFO:Found binary /usr/bin/ssh 20100428112817:INFO:Reading ssh file /etc/ssh/ssh_config 20100428112817:INFO:Testing option GSSAPIAuthentication 20100428112817:INFO:Option GSSAPIAuthentication supported 20100428112817:INFO:Testing option GSSAPIDelegateCredentials 20100428112817:INFO:Option GSSAPIDelegateCredentials supported 20100428112821:INFO:Running module join 20100428112821:INFO:Starting krb5.conf configuration (enabling) 20100428112821:INFO:Reading krb5 file /tmp/likewisetmpvgqQmT/etc/krb5.conf 20100428112821:WARNING:Short domain name not specified. Defaulting to 'betgenius' 20100428112821:INFO:Failed to run lwinet ads trusts. This is expected if not yet joined to the domain 20100428112821:INFO:Failed to run lwiinfo --details -m. This is expected if the auth daemon is not running 20100428112821:INFO:Writing krb5 file /tmp/likewisetmpvgqQmT/etc/krb5.conf 20100428112821:INFO:File /tmp/likewisetmpvgqQmT/etc/krb5.conf modified 20100428112821:INFO:Finishing krb5.conf configuration Has anyone seen this error before? and know of the fix?

    Read the article

  • Error when reloading supervisord: unix:///tmp/supervisor.sock no such file

    - by Yarin
    I'm running supervisord on my CentOS 6 box like so, /usr/bin/supervisord -c /etc/supervisord.conf and when I launch supervisorctl all process status are fine, but if I try to reload using supervisorctl I get unix:///tmp/supervisor.sock no such file I'm using the same config file I've used successfully on other boxes, and im running everything as root. I can't undesrtand what the problem is... Config file: ; Sample supervisor config file. [unix_http_server] file=/tmp/supervisor.sock ; (the path to the socket file) ;chmod=0700 ; socket file mode (default 0700) ;chown=nobody:nogroup ; socket file uid:gid owner ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) ;[inet_http_server] ; inet (TCP) server disabled by default ;port=127.0.0.1:9001 ; (ip_address:port specifier, *:port for all iface) ;username=user ; (default is no username (open server)) ;password=123 ; (default is no password (open server)) [supervisord] logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log) logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB) logfile_backups=10 ; (num of main logfile rotation backups;default 10) loglevel=info ; (log level;default info; others: debug,warn,trace) pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid) nodaemon=false ; (start in foreground if true;default false) minfds=1024 ; (min. avail startup file descriptors;default 1024) minprocs=200 ; (min. avail process descriptors;default 200) ;umask=022 ; (process file creation umask;default 022) ;user=chrism ; (default is current user, required if root) ;identifier=supervisor ; (supervisord identifier, default is 'supervisor') ;directory=/tmp ; (default is not to cd during start) ;nocleanup=true ; (don't clean up tempfiles at start;default false) ;childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP) ;environment=KEY=value ; (key value pairs to add to environment) ;strip_ansi=false ; (strip ansi escape codes in logs; def. false) ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be ; added by defining them in separate rpcinterface: sections [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface [supervisorctl] serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket ;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket ;username=chris ; should be same as http_username if set ;password=123 ; should be same as http_password if set ;prompt=mysupervisor ; cmd line prompt (default "supervisor") ;history_file=~/.sc_history ; use readline history if available ; The below sample program section shows all possible program subsection values, ; create one or more 'real' program: sections to be able to control them under ; supervisor. 
;[program:foo] ;command=/bin/cat [program:embed_scheduler] command=/opt/web-apps/mywebsite/custom_process.py process_name=%(program_name)s_%(process_num)d numprocs=3 ;[program:theprogramname] ;command=/bin/cat ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=999 ; the relative start priority (default 999) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups=10 ; # of stderr logfile backups (default 10) ;stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions (def no adds) ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample eventlistener section shows all possible ; eventlistener subsection values, create one or more 'real' ; eventlistener: sections to be able to handle event notifications ; sent by supervisor. ;[eventlistener:theeventlistenername] ;command=/bin/eventlistener ; the program (relative uses PATH, can take args) ;process_name=%(program_name)s ; process_name expr (default %(program_name)s) ;numprocs=1 ; number of processes copies to start (def 1) ;events=EVENT ; event notif. types to subscribe to (req'd) ;buffer_size=10 ; event buffer queue size (default 10) ;directory=/tmp ; directory to cwd to before exec (def no cwd) ;umask=022 ; umask for process (default None) ;priority=-1 ; the relative start priority (default -1) ;autostart=true ; start at supervisord start (default: true) ;autorestart=unexpected ; whether/when to restart (default: unexpected) ;startsecs=1 ; number of secs prog must stay running (def. 
1) ;startretries=3 ; max # of serial start failures (default 3) ;exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) ;stopsignal=QUIT ; signal used to kill process (default TERM) ;stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) ;killasgroup=false ; SIGKILL the UNIX process group (def false) ;user=chrism ; setuid to this UNIX account to run the program ;redirect_stderr=true ; redirect proc stderr to stdout (default false) ;stdout_logfile=/a/path ; stdout log path, NONE for none; default AUTO ;stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stdout_logfile_backups=10 ; # of stdout logfile backups (default 10) ;stdout_events_enabled=false ; emit events on stdout writes (default false) ;stderr_logfile=/a/path ; stderr log path, NONE for none; default AUTO ;stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB) ;stderr_logfile_backups ; # of stderr logfile backups (default 10) ;stderr_events_enabled=false ; emit events on stderr writes (default false) ;environment=A=1,B=2 ; process environment additions ;serverurl=AUTO ; override serverurl computation (childutils) ; The below sample group section shows all possible group values, ; create one or more 'real' group: sections to create "heterogeneous" ; process groups. ;[group:thegroupname] ;programs=progname1,progname2 ; each refers to 'x' in [program:x] definitions ;priority=999 ; the relative start priority (default 999) ; The [include] section can just contain the "files" setting. This ; setting can list multiple files (separated by whitespace or ; newlines). It can also contain wildcards. The filenames are ; interpreted as relative to this file. Included files *cannot* ; include files themselves. ;[include] ;files = relative/directory/*.ini

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop ! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser ? Can I modify the following script* in order to keep the icons ? *copied from https://help.ubuntu.com/...: ## Originally written by aysiu from the Ubuntu Forums ## This is GPL'ed code ## So improve it and re-release it ## Define portion to make Thunar the default if that appears to be the appropriate action makethunardefault() { ## I went with --no-install-recommends because ## I didn't want to bring in a whole lot of junk, ## and Jaunty installs recommended packages by default. echo -e "\nMaking sure Thunar is installed\n" sudo apt-get update && sudo apt-get install thunar --no-install-recommends ## Does it make sense to change to the directory? ## Or should all the individual commands just reference the full path? echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nMaking backup directory\n" ## Does it make sense to create an entire backup directory? ## Should each file just be backed up in place? sudo mkdir nonautilusplease echo -e "\nModifying folder handler launcher\n" sudo cp nautilus-folder-handler.desktop nonautilusplease/ ## Here I'm using two separate sed commands ## Is there a way to string them together to have one ## sed command make two replacements in a single file? sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop echo -e "\nModifying browser launcher\n" sudo cp nautilus-browser.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop echo -e "\nModifying computer icon launcher\n" sudo cp nautilus-computer.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop echo -e "\nModifying home icon launcher\n" sudo cp nautilus-home.desktop nonautilusplease/ sudo sed -i -n 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop sudo sed -i -n 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop echo -e "\nModifying general Nautilus launcher\n" sudo cp nautilus.desktop nonautilusplease/ sudo sed -i -n 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop ## This last bit I'm not sure should be included ## See, the only thing that doesn't change to the ## new Thunar default is clicking the files on the desktop, ## because Nautilus is managing the desktop (so technically ## it's not launching a new process when you double-click ## an icon there). ## So this kills the desktop management of icons completely ## Making the desktop pretty useless... would it be better ## to keep Nautilus there instead of nothing? Or go so far ## as to have Xfce manage the desktop in Gnome? echo -e "\nChanging base Nautilus launcher\n" sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus echo -e "\nRemoving Nautilus as desktop manager\n" killall nautilus echo -e "\nThunar is now the default file manager. 
To return Nautilus to the default, run this script again.\n" } restorenautilusdefault() { echo -e "\nChanging to application launcher directory\n" cd /usr/share/applications echo -e "\nRestoring backup files\n" sudo cp nonautilusplease/nautilus-folder-handler.desktop . sudo cp nonautilusplease/nautilus-browser.desktop . sudo cp nonautilusplease/nautilus-computer.desktop . sudo cp nonautilusplease/nautilus-home.desktop . sudo cp nonautilusplease/nautilus.desktop . echo -e "\nRemoving backup folder\n" sudo rm -r nonautilusplease echo -e "\nRestoring Nautilus launcher\n" sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus echo -e "\nMaking Nautilus manage the desktop again\n" nautilus --no-default-window & ## The only change that isn't undone is the installation of Thunar ## Should Thunar be removed? Or just kept in? ## Don't want to load the script with too many questions? } ## Make sure that we exit if any commands do not complete successfully. ## Thanks to nanotube for this little snippet of code from the early ## versions of UbuntuZilla set -o errexit trap 'echo "Previous command did not complete successfully. Exiting."' ERR ## This is the main code ## Is it necessary to put an elseif in here? Or is ## redundant, since the directory pretty much ## either exists or it doesn't? ## Is there a better way to keep track of whether ## the script has been run before? if [[ -e /usr/share/applications/nonautilusplease ]]; then restorenautilusdefault else makethunardefault fi;
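    The sed one-liners are the heart of the script; the same in-place edit is sketched below in Python for readability. The path is one example launcher, a .bak copy stands in for the script's backup directory, and it needs the same elevated privileges the original gets from sudo.

        import shutil

        path = "/usr/share/applications/nautilus-home.desktop"
        shutil.copy(path, path + ".bak")    # keep a backup, as the script does

        with open(path) as f:
            text = f.read()

        text = text.replace("nautilus --no-desktop", "thunar")
        text = text.replace("TryExec=nautilus", "TryExec=thunar")

        with open(path, "w") as f:
            f.write(text)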

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
I then proceeded to execute the same command on mds to see what it was doing and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the Spotlight icon would say 6 hours remaining and all mdworkers were gone, with mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for any attempts to help, M
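    If anyone wants to automate the "last file mdworker touched" step, a rough sketch: capture the opensnoop output to a file and let a few lines of Python report the final path seen. The column layout (UID, PID, command, FD, path) is an assumption based on the snippet above; adjust the split if your opensnoop prints differently.

        import sys

        last_path = None
        with open(sys.argv[1]) as log:          # file containing captured opensnoop output
            for line in log:
                parts = line.split(None, 4)     # UID PID COMM FD PATH
                if len(parts) == 5 and parts[4].startswith("/"):
                    last_path = parts[4].strip()

        print("last path touched:", last_path)

    Then exclude the folder enclosing whatever it prints, as described above.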

    Read the article

  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command in CentOS 6.4. I can't use yum since it's going to be an offline install so there's no access to the repository. The only MySQL package installed is mysql-libs as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum it correctly accounted for mysql-libs and uninstalled it at the end as MariaDB could handle the dependencies after it was installed: [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify Loading mirror speeds from cached hostfile * base: mirrors.kernel.org * extras: mirror.keystealth.org * updates: mirror.umd.edu Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted --> Finished Dependency Resolution Dependencies Resolved ==================================================================================================================================================================== Package Arch Version Repository Size ==================================================================================================================================================================== Installing: MariaDB-client x86_64 5.5.32-1 mariadb 10 M MariaDB-common x86_64 5.5.32-1 mariadb 23 k MariaDB-compat x86_64 5.5.32-1 mariadb 2.7 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 MariaDB-devel x86_64 5.5.32-1 mariadb 5.6 M MariaDB-server x86_64 5.5.32-1 mariadb 34 M MariaDB-shared x86_64 5.5.32-1 mariadb 1.1 M replacing mysql-libs.x86_64 5.1.66-2.el6_3 Transaction Summary ==================================================================================================================================================================== Install 6 Package(s) Total download size: 53 M Is this ok [y/N]: y Downloading Packages: (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm | 10 MB 00:06 (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm | 23 kB 00:00 (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm | 2.7 MB 00:02 (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm | 5.6 MB 00:06 (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm | 34 MB 00:23 (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm | 1.1 MB 00:00 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 1.3 MB/s | 53 MB 00:40 warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Importing GPG key 0x1BB943DB: Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>" From : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Warning: RPMDB altered outside of yum. 
Installing : MariaDB-compat-5.5.32-1.x86_64 1/7 Installing : MariaDB-common-5.5.32-1.x86_64 2/7 Installing : MariaDB-server-5.5.32-1.x86_64 3/7 chown: cannot access `/var/lib/mysql': No such file or directory PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! To do so, start the server, then issue the following commands: '/usr/bin/mysqladmin' -u root password 'new-password' '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password' Alternatively you can run: '/usr/bin/mysql_secure_installation' which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers. See the MariaDB Knowledgebase at http://kb.askmonty.org or the MySQL manual for more instructions. Please report any problems with the '/usr/bin/mysqlbug' script! The latest information about MariaDB is available at http://mariadb.org/. You can find additional information about the MySQL part at: http://dev.mysql.com Support MariaDB development by buying support/new features from Monty Program Ab. You can contact us about this at [email protected]. Alternatively consider joining our community based development effort: http://kb.askmonty.org/en/contributing-to-the-mariadb-project/ Installing : MariaDB-devel-5.5.32-1.x86_64 4/7 Installing : MariaDB-client-5.5.32-1.x86_64 5/7 Installing : MariaDB-shared-5.5.32-1.x86_64 6/7 Erasing : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Verifying : MariaDB-common-5.5.32-1.x86_64 1/7 Verifying : MariaDB-server-5.5.32-1.x86_64 2/7 Verifying : MariaDB-devel-5.5.32-1.x86_64 3/7 Verifying : MariaDB-client-5.5.32-1.x86_64 4/7 Verifying : MariaDB-compat-5.5.32-1.x86_64 5/7 Verifying : MariaDB-shared-5.5.32-1.x86_64 6/7 Verifying : mysql-libs-5.1.66-2.el6_3.x86_64 7/7 Installed: MariaDB-client.x86_64 0:5.5.32-1 MariaDB-common.x86_64 0:5.5.32-1 MariaDB-compat.x86_64 0:5.5.32-1 MariaDB-devel.x86_64 0:5.5.32-1 MariaDB-server.x86_64 0:5.5.32-1 MariaDB-shared.x86_64 0:5.5.32-1 Replaced: mysql-libs.x86_64 0:5.1.66-2.el6_3 Complete! My question is, what is the equivalent way to install the MariaDB packages using the rpm command only as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I will get a ton of messages like the following about conflicts with mysql-libs: file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64 I then used the --force option to install the MariaDB rpms and uninstalled mysql-lib, I didn't get any weird messages but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle mysql-libs conflicts/removal: rpm -ivh --force MariaDB*.rpm rpm -e mysql-libs Thanks for any input!
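    For scripting the offline install, the same two commands the question ends with can be wrapped with error checking - sketched here with Python's subprocess. The RPM directory is a placeholder, and whether --force followed by removing mysql-libs is really as clean as what yum does is exactly the open question.

        import glob
        import subprocess

        rpm_dir = "/path/to/mariadb-rpms"              # placeholder location
        rpms = sorted(glob.glob(rpm_dir + "/MariaDB-*.rpm"))

        # Mirror the manual sequence from the question: force-install the MariaDB
        # packages over the mysql-libs file conflicts, then remove mysql-libs once
        # MariaDB-compat/MariaDB-shared can satisfy its dependents.
        subprocess.check_call(["rpm", "-ivh", "--force"] + rpms)
        subprocess.check_call(["rpm", "-e", "mysql-libs"])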

    Read the article

  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008R2 cluster related issue that is bothering me. I feel that I have come close as to what the issue is, but still don't fully understand what is happening. I have a two node exchange 2007 cluster running on two 2008R2 servers. The exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster ressource to the secondary node. When failing over the cluster to the "secondary" node, which for instance is on the same subnet as the "primary", the failover initially works ok and the cluster ressource continues to work for a couple of minutes on the new node. Which means that the recieving node does send out a gratuitous arp reply packet that updated the arp tables on the network. But after x amount of time (typically within 5 minutes time) something updates the arp-tables again because all of a sudden the cluster service does not answer to pings. So basically I start a ping to the exchange cluster address when its running on the "primary node". It works just great. I failover the cluster ressource group to the "secondary node" and I only have loss of one ping which is acceptable. The cluster ressource still answers for some time after being failed over and all of a sudden the ping starts timing out. This is telling me that the arp table initially is updated by the secondary node, but then something (which I haven't found out yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB and the problem stops immidiately after failing over back to the primary node where there are no problems. Each node is using NIC teaming (intel) with ALB. Each node is on the same subnet and has gateway and so on entered correctly as far as I am concerned. Edit: I was wondering if it could be related to network binding order maybe? Because I have noticed that the only difference I can see from node to node is when showing the local arp table. On the "primary" node the arp table is generated on the cluster address as the source. While on the "secondary" its generated from the nodes own network card. Any input on this? Edit: Ok here is the connection layout. Cluster address: A.B.6.208/25 Exchange application address: A.B.6.212/25 Node A: 3 physical nics. Two teamed using intels teaming with the address A.B.6.210/25 called public The last one used for cluster traffic called private with 10.0.0.138/24 Node B: 3 physical nics. Two teamed using intels teaming with the address A.B.6.211/25 called public The last one used for cluster traffic called private with 10.0.0.139/24 Each node sits in a seperate datacenter connected together. End switches being cisco in DC1 and NEXUS 5000/2000 in DC2. Edit: I have been testing a little more. I have now created an empty application on the same cluster, and given it another ip address on the same subnet as the exchange application. After failing this empty application over, I see the exact same problem occuring. After one or two minutes clients on other subnets cannot ping the virtual ip of the application. But while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging. But if i then make another failover to the original state, then the situation is the opposite. So now clients on same subnet cannot, and on other they can. 
We have another cluster set up the same way and on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings. There we are not seeing this, so it's somewhat confusing. Edit: OK, done some more research. Removed the NIC teaming of the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming - maybe some kind of bug? Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like it was a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I cannot pinpoint the root of the problem. It just seems to be some kind of MS/Intel hiccup - which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which the old team was called. So something has not been cleaned up in Windows - although the server HAS been restarted! Edit: OK, after re-establishing the ALB teaming the error came back. So I am now going to do some thorough testing and will get back with my observations. One thing is for sure: it is related to the Intel 82575EB NICs, ALB and gratuitous ARP.
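    To pin down exactly when (and to what) the entry flips, one option is to poll the ARP cache from an affected client and log every change. A minimal sketch - the address below is the anonymized application IP from the post, and it simply shells out to arp -a on the client:

        import subprocess
        import time

        ip = "A.B.6.212"    # exchange application address (anonymized above)
        last = None
        while True:
            out = subprocess.run(["arp", "-a", ip],
                                 capture_output=True, text=True).stdout
            if out != last:                  # entry appeared, vanished or changed MAC
                print(time.strftime("%Y-%m-%d %H:%M:%S"), out.strip() or "<no entry>")
                last = out
            time.sleep(1)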

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
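    To take the browsers out of the equation entirely, the two transfer rates can be measured with a few lines of Python. The URL is the anonymized one from the headers above; if the certificate is self-signed, the HTTPS request may additionally need an unverified SSL context.

        import time
        import urllib.request

        def throughput_kbs(url):
            start, total = time.time(), 0
            with urllib.request.urlopen(url) as resp:
                while True:
                    chunk = resp.read(1 << 20)       # read 1 MiB at a time
                    if not chunk:
                        break
                    total += len(chunk)
            return total / (time.time() - start) / 1024

        for scheme in ("http", "https"):
            url = scheme + "://www.mydomain.com/elephantsdream_source.264"
            print(scheme, "%.0f kB/s" % throughput_kbs(url))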

    Read the article

  • Mac won't boot into safe mode

    - by Stephen
    Mac boots fine normally, except when in safe mode. Holding down shift when booting gets me to the progress bar on the grey screen. Progress bar gets about half way before mac reboots. I modified nvram boot-args to get a better look: sudo nvram boot-args="-x -v" It definitely gets through fsck, skips loading kernel extensions (since it's in safe mode), does something with the network interfaces, then this is the last thing it wips through... Aug 22 11:56:21 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'com.apple.ServiceManagement.daemons.modify' by client '/usr/libexec/UserEventAgent' [10] for authorization created by '/usr/libexec/UserEventAgent' [10] (100012,0) Aug 22 11:56:22 Crockpot fseventsd[37]: event logs in /.fseventsd out of sync with volume. destroying old logs. (1 174 330) Aug 22 11:56:22 Crockpot fseventsd[37]: log dir: /.fseventsd getting new uuid: 5C379650-26FA-428F-B81F-4FE4349D50B3 Aug 22 11:56:23 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) starting OSXVers 12 Aug 22 11:56:23 Crockpot systemkeychain[35]: done file: /var/run/systemkeychaincheck.done Aug 22 11:56:23 Crockpot configd[17]: network changed: DNS* Aug 22 11:56:24 --- last message repeated 1 time --- Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2D_IPC: Loaded Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2DInitialize succeeded Aug 22 11:56:24 Crockpot mDNSResponder[39]: Adding registration domain 273025955.members.btmm.icloud.com. Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe MAC AUTH succeeded Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe Unsolicited Auth Aug 22 11:56:24 Crockpot kernel[0]: wlEvent: en1 en1 Link UP virtIf = 0 Aug 22 11:56:24 Crockpot kernel[0]: AirPort: Link Up on en1 Aug 22 11:56:24 Crockpot kernel[0]: en1: BSSID changed to 00:23:69:35:dc:fe Aug 22 11:56:24 Crockpot kernel[0]: en1::IO80211Interface::postMessage bssid changed Aug 22 11:56:24 Crockpot kernel[0]: AirPort: RSN handshake complete on en1 Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences failed to read preferences data. Errno was 21 Aug 22 11:56:25 --- last message repeated 1 time --- Aug 22 11:56:25 Crockpot airportd[30]: _doAutoJoin: Already associated to “burnum”. Bailing on auto-join. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Can't load IOBluetoothSerialManager.kext - ineligible during safe boot. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Load com.apple.iokit.IOBluetoothSerialManager failed; removing personalities from kernel. Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences: error renaming file blued.plist.HXuEmQn to blued.plist. Aug 22 11:56:27 Crockpot awacsd[52]: Starting awacsd connectivity-77 (Jun 20 2012 15:40:49) Aug 22 11:56:27 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'system.services.systemconfiguration.network' by client '/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/Resources/SCHelper' [54] for authorization created by '/usr/sbin/awacsd' [52] (100003,0) Aug 22 11:56:27 --- last message repeated 1 time --- Aug 22 11:56:27 Crockpot awacsd[52]: Configuring lazy AWACS client: 273025955.p04.members.btmm.icloud.com. 
Aug 22 11:56:28 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:32 --- last message repeated 1 time --- Aug 22 11:56:32 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:38 --- last message repeated 1 time --- Aug 22 11:56:38 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:47 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:49 Crockpot configd[17]: subnet_route: write routing socket failed, Network is unreachable Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1+:169.254.80.161) DNS* Proxy+ SMB Aug 22 11:56:51 Crockpot UserEventAgent[10]: Captive: en1: Not probing 'burnum' (protected network) Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1:169.254.80.161) DNS Proxy SMB Aug 22 11:57:07 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:57:23 Crockpot fseventsd[37]: Logging disabled completely for device:1: /Volumes/Recovery HD Aug 22 11:57:25 Crockpot kernel[0]: Kext loading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kext unloading now disabled. Aug 22 11:57:25 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) stopping Aug 22 11:57:25 Crockpot com.apple.SecurityServer[15]: Killing auth hosts Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot configd[17]: dnssd_clientstub read_all(26) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: [0x7fb025119ff0] SCNetworkReachability _llq_callback w/error=-65563 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2D_IPC: Terminated Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2DTerminate succeeded Aug 22 11:57:25 Crockpot awacsd[52]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 2 times --- Aug 22 11:57:25 Crockpot apsd[55]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: SCNC: stop, triggered by configd, type PPPSerial, reason Terminated All Aug 22 11:57:25 Crockpot configd[17]: _d2dCallback: D2D connection to mDNSResponder lost Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 4 times --- Aug 22 11:57:25 Crockpot kernel[0]: Kext autounloading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kernel requests now disabled. ... before rebooting in the middle of the safe mode startup sequence. Aug 22 12:01:10 localhost bootlog[0]: BOOT_TIME 1345662070 0 Aug 22 12:01:32 localhost kernel[0]: PMAP: PCID enabled Aug 22 12:01:32 localhost kernel[0]: Darwin Kernel Version 12.0.0: Sun Jun 24 23:00:16 PDT 2012; root:xnu-2050.7.9~1/RELEASE_X86_64 Any ideas what's causing the safe mode boot to fail? System Info MacBook Pro 8,2 2.2 Ghz Core i7 4 GM Ram Mountain Lion 10.8 500GB TOSHIBA MK5065GSXF Serial-ATA rotational disk

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up Bugzilla to receive bug reports from another system using the XML-RPC interface. Bugzilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser, I get the error:
The XML-RPC Interface feature is not available in this Bugzilla
 at C:\BugZilla\xmlrpc.cgi line 27
 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29
 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29
Following this I installed the test-taint package from the default Perl repository, which installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error:
502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
So I run checksetup.pl, which informs me that:
Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.
Installing Test-Taint from CPAN gives the same result. I assume the XML-RPC interface is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint functionality working?
Current setup: IIS 7.5 on Windows Server 2008, Bugzilla 4.2.2, Perl 5.14.2.
C:\BugZilla>perl checksetup.pl
Set up gcc environment - 3.4.5 (mingw-vista special r3)
* This is Bugzilla 4.2.2 on perl 5.14.2
* Running on Win2008 Build 6002 (Service Pack 2)
Checking perl modules...
Checking for CGI.pm (v3.51) ok: found v3.59
Checking for Digest-SHA (any) ok: found v5.62
Checking for TimeDate (v2.21) ok: found v2.24
Checking for DateTime (v0.28) ok: found v0.76
Checking for DateTime-TimeZone (v0.79) ok: found v1.48
Checking for DBI (v1.614) ok: found v1.622
Checking for Template-Toolkit (v2.22) ok: found v2.24
Checking for Email-Send (v2.16) ok: found v2.198
Checking for Email-MIME (v1.904) ok: found v1.911
Checking for URI (v1.37) ok: found v1.59
Checking for List-MoreUtils (v0.22) ok: found v0.33
Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004
Checking for Win32 (v0.35) ok: found v0.44
Checking for Win32-API (v0.55) ok: found v0.64
Checking available perl DBD modules...
Checking for DBD-Pg (v1.45) ok: found v2.18.1
Checking for DBD-mysql (v4.001) ok: found v4.021
Checking for DBD-SQLite (v1.29) ok: found v1.33
Checking for DBD-Oracle (v1.19) ok: found v1.30
The following Perl modules are optional:
Checking for GD (v1.20) ok: found v2.46
Checking for Chart (v2.1) ok: found v2.4.5
Checking for Template-GD (any) ok: found v1.56
Checking for GDTextUtil (any) ok: found v0.86
Checking for GDGraph (any) ok: found v1.44
Checking for MIME-tools (v5.406) ok: found v5.503
Checking for libwww-perl (any) ok: found v6.02
Checking for XML-Twig (any) ok: found v3.41
Checking for PatchReader (v0.9.6) ok: found v0.9.6
Checking for perl-ldap (any) ok: found v0.44
Checking for Authen-SASL (any) ok: found v2.15
Checking for RadiusPerl (any) ok: found v0.20
Checking for SOAP-Lite (v0.712) ok: found v0.715
Checking for JSON-RPC (any) ok: found v0.96
Checking for JSON-XS (v2.0) ok: found v2.32
Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.
Checking for Test-Taint (any) ok: found v1.04
Checking for HTML-Parser (v3.67) ok: found v3.68
Checking for HTML-Scrubber (any) ok: found v0.09
Checking for Encode (v2.21) ok: found v2.44
Checking for Encode-Detect (any) not found
Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
Checking for Email-Reply (any) ok: found v1.202
Checking for TheSchwartz (any) not found
Checking for Daemon-Generic (any) not found
Checking for mod_perl (v1.999022) not found
Checking for Apache-SizeLimit (v0.96) not found
***********************************************************************
* OPTIONAL MODULES                                                    *
***********************************************************************
* Certain Perl modules are not required by Bugzilla, but by           *
* installing the latest version you gain access to additional         *
* features.                                                           *
*                                                                     *
* The optional modules you do not have installed are listed below,    *
* with the name of the feature they enable. Below that table are the  *
* commands to install each module.                                    *
***********************************************************************
* MODULE NAME      * ENABLES FEATURE(S)                               *
***********************************************************************
* Encode-Detect    * Automatic charset detection for text attachments *
* TheSchwartz      * Mail Queueing                                    *
* Daemon-Generic   * Mail Queueing                                    *
* mod_perl         * mod_perl                                         *
* Apache-SizeLimit * mod_perl                                         *
***********************************************************************
COMMANDS TO INSTALL OPTIONAL MODULES:
Encode-Detect: ppm install Encode-Detect
TheSchwartz: ppm install TheSchwartz
Daemon-Generic: ppm install Daemon-Generic
mod_perl: ppm install mod_perl
Apache-SizeLimit: ppm install Apache-SizeLimit
Reading ./localconfig...
OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/
Checking for DBD-mysql (v4.001) ok: found v4.021
Checking for MySQL (v5.0.15) ok: found v5.5.27
WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file.
Removing existing compiled templates...
Precompiling templates...done.
checksetup.pl complete.
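Once xmlrpc.cgi loads without the Test-Taint error, the endpoint can be exercised from the submitting system independently of bz_webservice_demo.pl. Below is a minimal, non-authoritative sketch using Python's standard xmlrpc.client against the usual Bugzilla 4.x WebService method names; the URL, credentials and product/component values are placeholders, and passing Bugzilla_login/Bugzilla_password with the call is assumed to stand in for a separate login step so no session cookies need to be carried between requests.

    # Minimal connectivity test for a Bugzilla 4.x XML-RPC endpoint
    # (URL, credentials, product and component below are placeholders).
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://bugzilla.example.com/xmlrpc.cgi")

    # A healthy xmlrpc.cgi answers this without authentication.
    print(proxy.Bugzilla.version())          # e.g. {'version': '4.2.2'}

    # Filing a bug exercises the same path the external system will use.
    new_bug = proxy.Bug.create({
        "Bugzilla_login": "reporter@example.com",
        "Bugzilla_password": "secret",
        "product": "TestProduct",
        "component": "TestComponent",
        "summary": "XML-RPC connectivity test",
        "version": "unspecified",
        "op_sys": "Windows",
        "platform": "PC",
    })
    print("Created bug", new_bug["id"])

If Bugzilla.version() works but Bug.create times out the way bz_webservice_demo.pl does, that points the investigation at the IIS/FastCGI gateway rather than at the XML-RPC feature itself.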

    Read the article

  • Web browsing is fast, but downloads are slow

    - by Ricket
    I work for a company on my university's campus, helping with general IT problems and some web development. But lately there has been a problem that has me and my boss completely stumped. We, plus one contractor, make up the entire IT department, so I'm reaching out to you for help. All around the office, we have wall jacks. These collect in a closet down the hall and all plug into a switch. This switch, along with our individual server jacks, plugs into another switch, and that switch plugs into our firewall hardware. Then the firewall is connected out to our campus network. Our campus internet is, well, very fast. I don't know exactly the terms, tiers, etc., but we have thousands of students and downloads can run as fast as 10 MB/s at night; uploads are sometimes even faster. I think we're practically ISP level. In short, I have a lot of faith that it is not the campus side of things that is causing a problem, combined with other evidence I'll mention in a moment. So our symptoms: web browsing is fast. Web pages, images, etc. load instantly. No problems there. But then when I go to download something, the download starts fast but very quickly (a matter of seconds) drops to nearly 0. Often it will actually drop to 0 and time out. This happens with even very small files, 1 MB or less. It smells to me like a QoS sort of thing. I'm not entirely sure, and I wanted to get your opinions first. My boss is hesitant to touch our firewall, much less let me touch it, and it was set up and is managed by a consultant remotely. These problems don't seem tied to a time of the day. I've tried downloads after 5:00 and still the same thing happens. From my desk, I can turn on my wireless adapter and pick up the campus wireless access point. If I unplug ethernet and connect to it, downloads are fast. This adds to my suspicion that it's limited to our company network. Also, a number of weeks ago the consultant upgraded our firewall firmware. Suddenly everything was very fast. I tested with downloads from Sun and speedtest.net and things were blazing fast, as they should be with our campus internet! It was wonderful, and I figured the slow speeds were an old firmware bug. In a matter of days, things steadily declined until they were back to the old symptoms. Oh, and we have antivirus installed on every computer, and we keep it up to date. Though I suppose the possibility is still there that someone could have spyware which is bogging down our internet, in which case what is the easiest/best way to find this out? (maybe this should go in a separate question) Thank you for your patience in reading all of this. Do you have any ideas as to what I can try? Is this something that you've experienced before? What sort of tools or methods can I use to try and diagnose the problem? P.S. everything here is Windows. Windows Server 2003 and 2008 on our servers, and Windows XP on employees' machines. Update: We are submitting a ticket to the university to just take a look and see if they see anything unusual and/or can suggestion methods for us to try and pinpoint our problem. Hopefully they'll be helpful! I'll update this to let you know what goes on. Update again: We found a hub (yes, a HUB) right between our campus connection and our firewall. It had only those two ethernet cables plugged into it, nothing else. After removing the hub, our speeds have jumped up to several mbps. However in talking with the campus, we got them to run a gigabit line to our firewall in place of the 100mbps line. 
As of Friday, we are at about 65 Mbps up and down (according to speedtest.net at 8 am)!! Go NC State!!
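For anyone chasing the same symptom, the "starts fast, then collapses" pattern is easy to capture with a rough per-second throughput probe run from an affected desk and again from the campus wireless. A small sketch (the test URL is a placeholder; any large static file works):

    # Rough per-second throughput probe: reads a large test file in chunks and
    # prints how many KB arrived in each one-second window. A healthy link shows
    # steady numbers; shaping or a bad hub/duplex setting tends to show a quick
    # first second followed by a collapse toward zero.
    import time
    import urllib.request

    TEST_URL = "http://speedtest.example.com/100MB.bin"  # placeholder test file

    def probe(url, chunk_size=64 * 1024, duration=30):
        window_start = time.time()
        window_bytes = 0
        deadline = window_start + duration
        # The socket timeout also applies to stalled reads, so a dead transfer
        # surfaces as an exception instead of hanging forever.
        with urllib.request.urlopen(url, timeout=10) as resp:
            while time.time() < deadline:
                chunk = resp.read(chunk_size)
                if not chunk:
                    break
                window_bytes += len(chunk)
                now = time.time()
                if now - window_start >= 1.0:
                    print(f"{window_bytes / 1024:8.1f} KB in the last second")
                    window_start = now
                    window_bytes = 0

    if __name__ == "__main__":
        probe(TEST_URL)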

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k: root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat . File: `.' Size: 458752 Blocks: 904 IO Block: 4096 directory Device: 812h/2066d Inode: 44499071 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 3292/ user) Gid: ( 3287/ user) Access: 2012-06-29 17:31:47.000000000 -0400 Modify: 2012-10-23 14:41:58.000000000 -0400 Change: 2012-10-23 14:41:58.000000000 -0400 I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ascii characters somewhere in the file name: La-critic\363-al-servicio-la-privacidad-300x160.jpg When I try to access the files (say to copy them or remove them) I get messages like the following: lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory) I tried altering the code found on this man page and modified the code to call unlink for each file. I get the same ENOENT error from the unlink call: unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory) I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process). I'm stumped on this one, I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number, I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT) For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc because I'm lazy and this is a one-off: root@server [~]# diff -u listdir-orig.c listdir.c --- listdir-orig.c 2012-10-23 15:10:02.000000000 -0400 +++ listdir.c 2012-10-23 14:59:47.000000000 -0400 @@ -6,6 +6,7 @@ #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> +#include <string.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) @@ -17,7 +18,7 @@ char d_name[]; }; -#define BUF_SIZE 1024 +#define BUF_SIZE 1024*1024*5 int main(int argc, char *argv[]) { @@ -26,11 +27,16 @@ struct linux_dirent *d; int bpos; char d_type; + int deleted; + int file_descriptor; fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY); if (fd == -1) handle_error("open"); + char* full_path; + char* fd_path; + for ( ; ; ) { nread = syscall(SYS_getdents, fd, buf, BUF_SIZE); if (nread == -1) @@ -55,7 +61,24 @@ printf("%4d %10lld %s\n", d->d_reclen, (long long) d->d_off, (char *) d->d_name); bpos += d->d_reclen; + if ( d_type == DT_REG ) + { + full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0 + strcpy(full_path, argv[1]); + strcat(full_path, (char *) d->d_name); + + //We're going to try to "touch" the file. 
+ //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666); + //fd_path = malloc(32); //Lazy, only really needs 16 + //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor); + //utimes(fd_path, NULL); + //close(file_descriptor); + deleted = unlink(full_path); + if ( deleted == -1 ) printf("Error unlinking file\n"); + break; //Break on first try + } } + break; //Break on first try } exit(EXIT_SUCCESS);
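One low-risk check worth making before scheduling the fsck is to rule out name-encoding problems entirely by listing and unlinking with raw byte strings, so the exact bytes returned by readdir() go straight back to unlink() with no locale decoding in between. A sketch of that idea follows (Python 3 syntax; the stock Python on a CentOS 5 box already treats paths as plain byte strings, just without the b'' prefixes). If unlink still returns ENOENT for the exact readdir() bytes, as the strace output above already suggests, the evidence points at directory corruption rather than a charset mismatch:

    # List and unlink with byte-string paths so no locale decoding happens
    # between readdir() and unlink(). The directory path is taken from the
    # question and is otherwise a placeholder.
    import os

    TARGET = b"/home/user/public_html/domain.com/wp-content/uploads/2010/03"

    for name in os.listdir(TARGET):              # bytes in, bytes out
        path = os.path.join(TARGET, name)
        try:
            os.unlink(path)
            print("removed", os.fsdecode(path))
        except OSError as exc:
            # ENOENT here, for the exact bytes readdir() handed back,
            # points at directory corruption rather than an encoding issue.
            print("failed ", os.fsdecode(path), exc)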

    Read the article

  • Could not load file or assembly 'System.Data.SQLite'

    - by J. Pablo Fernández
    I've installed ELMAH 1.1 .Net 3.5 x64 in my ASP.NET project and now I'm getting this error (whenever I try to see any page): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format. More error details at the bottom. My Active Solution platform is "Any CPU" and I'm running on a x64 Windows 7 on an x64, of course, processor. The reason why we are using this version of ELMAH is because 1.0 .Net 3.5 (x86, which is the only platform for which it's compiled) gave us this same error on our x64 Windows server. I've tried compiling for x86 and x64 and I get the same error. I've tried removing the all compiler output (bin and obj). Finally I've made a reference to the SQLite dll directly, something that was not needed for the project to work on the server and I've got this compiler error: Error 1 Warning as Error: Assembly generation -- Referenced assembly 'System.Data.SQLite.dll' targets a different processor MyProject Any ideas what the problem might be? More error details: Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0 System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43 System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127 System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142 System.Reflection.Assembly.Load(String assemblyString) +28 System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46 [ConfigurationErrorsException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] 
System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613 System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203 System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105 System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178 System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54 System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232 System.Web.Compilation.BuildManager.CompileGlobalAsax() +52 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337 [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512 System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729 [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.] System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8896783 System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85 System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259
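A BadImageFormatException on x64 almost always means a 32-bit image ended up in a 64-bit process; System.Data.SQLite 1.0.x is a mixed-mode assembly, so the DLL itself is architecture-specific even when the rest of the solution builds as "Any CPU". One quick, non-authoritative way to see what a given DLL in the bin folder was actually built for is to read the Machine field of its PE/COFF header (0x14c = x86, 0x8664 = x64). Note that a pure-MSIL Any CPU assembly also reports x86 here, so for managed-only DLLs corflags.exe gives the fuller picture. A minimal sketch:

    # Print which CPU architecture a DLL/EXE was built for by reading the
    # Machine field of its PE/COFF header.
    import struct
    import sys

    MACHINE_NAMES = {0x014C: "x86", 0x8664: "x64", 0x0200: "Itanium"}

    def pe_machine(path):
        with open(path, "rb") as f:
            if f.read(2) != b"MZ":
                raise ValueError("not a PE file")
            f.seek(0x3C)                                   # e_lfanew
            pe_offset = struct.unpack("<I", f.read(4))[0]
            f.seek(pe_offset)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("missing PE signature")
            machine = struct.unpack("<H", f.read(2))[0]    # COFF Machine field
        return MACHINE_NAMES.get(machine, hex(machine))

    if __name__ == "__main__":
        # e.g. python pe_machine.py bin\System.Data.SQLite.dll
        print(pe_machine(sys.argv[1]))

If the DLL reports x86, dropping in the x64 build of System.Data.SQLite (or running the app pool in 32-bit mode) is the usual way out of this mismatch.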

    Read the article

  • Animating the offset of the scrollView in a UICollectionView/UITableView causes prematurely disappearing cells

    - by radutzan
    We have a UICollectionView with a custom layout very similar to UITableView (it scrolls vertically). The UICollectionView displays only 3 cells simultaneously, with one of them being the currently active cell: [ 1 ] [*2*] [ 3 ] (The active cell here is #2.) The cells are roughly 280 points high, so only the active cell is fully visible on the screen. The user doesn't directly scroll the view to navigate, instead, she swipes the active cell horizontally to advance to the next cell. We then do some fancy animations and scroll the UICollectionView so the next cell is in the "active" position, thus making it the active one, moving the old one away and bringing up the next cell in the queue: [ 2 ] [*3*] [ 4 ] The problem here is setting the UICollectionView's offset. We currently set it in a UIView animation block (self.collectionView.contentOffset = targetOffset;) along with three other animating properties, which mostly works great, but causes the first cell (the previously active one, in the latter case, #2) to vanish as soon as the animation starts running, even before the delay interval completes. This is definitely not ideal. I've thought of some solutions, but can't figure out the best one: Absurdly enlarge the UICollectionView's frame to fit five cells instead of three, thus forcing it to keep the cells in memory even if they are offscreen. I've tried this and it works, but it sounds like an awfully dirty hack. Take a snapshot of the rendered content of the vanishing cell, put it in a UIImageView, add the UIImageView as a subview of the scrollView just before the cell goes away in the exact same position of the old cell, removing it once the animation ends. Sounds less sucky than the previous option (memory-wise, at least), but still kinda hacky. I also don't know the best way to accomplish this, please point me in the right direction. Switch to UIScrollView's setContentOffset:animated:. We actually used to have this, and it fixed the disappearing cell issue, but running this in parallel with the other UIView animations apparently competes for the attention of the main thread, thus creating a terribly choppy animation on single-core devices (iPhone 3GS/4). It also doesn't allow us to change the duration or easing of the animation, so it feels out of sync with the rest. Still an option if we can find a way to make it work in harmony with the UIView block animations. Switch to UICollectionView's scrollToItemAtIndexPath:atScrollPosition:animated:. Haven't tried this, but it has a big downside: it only takes 3 possible constants (that apply to this case, at least) for the target scroll position: UICollectionViewScrollPositionTop, UICollectionViewScrollPositionCenteredVertically and UICollectionViewScrollPositionBottom. The active cell could vary its height, but it always has to be 35 points from the top of the window, and these options don't provide enough control to accomplish the design. It could also potentially be just as problematic as 3.1. Still an option because there might be a way to go around the scroll position thing that I don't know of, and it might not have the same issue with the main thread, which seems unlikely. Any help will be greatly appreciated. Please ask if you need clarification. Thanks a lot!

    Read the article

  • "'0.offsetWidth' is null or not an object" - Coda Slider - Javascript Error Question

    - by bgadoci
    I implemented the Coda Slider tutorial successfully that is located here: http://jqueryfordesigners.com/coda-slider-effect/ The slider works great but I am getting a javascript error that I am not sure how to fix. The error says: '0.offsetWidth' is null or not an object coda-slider.js, line 19 character 3 Not sure how to fix it. Anyone have any ideas? Here is my js and css (don't think I need to upload the HTML but let me know if that helps). JS (coda-slider.js) // when the DOM is ready... $(document).ready(function () { var $panels = $('#slider .scrollContainer > div'); var $container = $('#slider .scrollContainer'); // if false, we'll float all the panels left and fix the width // of the container var horizontal = true; // float the panels left if we're going horizontal if (horizontal) { $panels.css({ 'float' : 'left', 'position' : 'relative' // IE fix to ensure overflow is hidden }); // calculate a new width for the container (so it holds all panels) $container.css('width', $panels[0].offsetWidth * $panels.length); <------line 19 } // collect the scroll object, at the same time apply the hidden overflow // to remove the default scrollbars that will appear var $scroll = $('#slider .scroll').css('overflow', 'hidden'); // apply our left + right buttons $scroll .before('<img class="scrollButtons left" src="/images/layout/navigation/scroll_left.png" />') .after('<img class="scrollButtons right" src="/images/layout/navigation/scroll_right.png" />'); // handle nav selection function selectNav() { $(this) .parents('ul:first') .find('a') .removeClass('selected') .end() .end() .addClass('selected'); } $('#slider .navigation').find('a').click(selectNav); // go find the navigation link that has this target and select the nav function trigger(data) { var el = $('#slider .navigation').find('a[href$="' + data.id + '"]').get(0); selectNav.call(el); } if (window.location.hash) { trigger({ id : window.location.hash.substr(1) }); } else { $('ul.navigation a:first').click(); } // offset is used to move to *exactly* the right place, since I'm using // padding on my example, I need to subtract the amount of padding to // the offset. Try removing this to get a good idea of the effect var offset = parseInt((horizontal ? $container.css('paddingTop') : $container.css('paddingLeft')) || 0) * -1; var scrollOptions = { target: $scroll, // the element that has the overflow // can be a selector which will be relative to the target items: $panels, navigation: '.navigation a', // selectors are NOT relative to document, i.e. make sure they're unique prev: 'img.left', next: 'img.right', // allow the scroll effect to run both directions axis: 'xy', onAfter: trigger, // our final callback offset: offset, // duration of the sliding effect duration: 500, // easing - can be used with the easing plugin: // http://gsgd.co.uk/sandbox/jquery/easing/ easing: 'swing' }; // apply serialScroll to the slider - we chose this plugin because it // supports// the indexed next and previous scroll along with hooking // in to our navigation. $('#slider').serialScroll(scrollOptions); // now apply localScroll to hook any other arbitrary links to trigger // the effect $.localScroll(scrollOptions); // finally, if the URL has a hash, move the slider in to position, // setting the duration to 1 because I don't want it to scroll in the // very first page load. We don't always need this, but it ensures // the positioning is absolutely spot on when the pages loads. 
scrollOptions.duration = 1; $.localScroll.hash(scrollOptions); }); CSS #slider { margin-left: 35px; position: relative; width: 875px; } .scroll { position: relative; width: 875px; height: 268px; overflow: auto; /* fix for IE to respect overflow */ background: #FFFFFF scroll 0; } .scrollContainer div.panel { position: relative; height: 210px; width: 875px; /* change to 560px if not using JS to remove rh.scroll */ } .scrollButtons { position: absolute; top: 115px; cursor: pointer; } .scrollButtons.left { left: -20px; } .scrollButtons.right { right: -20px; }

    Read the article

  • Android: How to track down the origin of a InflateException?

    - by Janusz
    While starting my application I get the following warning in Logcat: 04-09 10:28:17.830: WARN/WindowManager(52): Exception when adding starting window 04-09 10:28:17.830: WARN/WindowManager(52): android.view.InflateException: Binary XML file line #24: Error inflating class <unknown> 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.createView(LayoutInflater.java:513) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56) 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:563) 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.inflate(LayoutInflater.java:385) 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.inflate(LayoutInflater.java:320) 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.inflate(LayoutInflater.java:276) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.internal.policy.impl.PhoneWindow.generateLayout(PhoneWindow.java:2153) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.internal.policy.impl.PhoneWindow.installDecor(PhoneWindow.java:2207) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.internal.policy.impl.PhoneWindow.getDecorView(PhoneWindow.java:1395) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.internal.policy.impl.PhoneWindowManager.addStartingWindow(PhoneWindowManager.java:818) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.server.WindowManagerService$H.handleMessage(WindowManagerService.java:8794) 04-09 10:28:17.830: WARN/WindowManager(52): at android.os.Handler.dispatchMessage(Handler.java:99) 04-09 10:28:17.830: WARN/WindowManager(52): at android.os.Looper.loop(Looper.java:123) 04-09 10:28:17.830: WARN/WindowManager(52): at com.android.server.WindowManagerService$WMThread.run(WindowManagerService.java:531) 04-09 10:28:17.830: WARN/WindowManager(52): Caused by: java.lang.reflect.InvocationTargetException 04-09 10:28:17.830: WARN/WindowManager(52): at android.widget.FrameLayout.<init>(FrameLayout.java:79) 04-09 10:28:17.830: WARN/WindowManager(52): at java.lang.reflect.Constructor.constructNative(Native Method) 04-09 10:28:17.830: WARN/WindowManager(52): at java.lang.reflect.Constructor.newInstance(Constructor.java:446) 04-09 10:28:17.830: WARN/WindowManager(52): at android.view.LayoutInflater.createView(LayoutInflater.java:500) 04-09 10:28:17.830: WARN/WindowManager(52): ... 13 more 04-09 10:28:17.830: WARN/WindowManager(52): Caused by: android.content.res.Resources$NotFoundException: Resource is not a Drawable (color or path): TypedValue{t=0x2/d=0x1010059 a=-1} 04-09 10:28:17.830: WARN/WindowManager(52): at android.content.res.Resources.loadDrawable(Resources.java:1677) 04-09 10:28:17.830: WARN/WindowManager(52): at android.content.res.TypedArray.getDrawable(TypedArray.java:548) 04-09 10:28:17.830: WARN/WindowManager(52): at android.widget.FrameLayout.<init>(FrameLayout.java:91) 04-09 10:28:17.830: WARN/WindowManager(52): ... 
17 more
My application starts with the following splash screen layout:
<?xml version="1.0" encoding="utf-8"?>
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:windowBackground="@color/white"
    android:background="@color/white"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:foregroundGravity="center">
    <ImageView android:id="@+id/ImageView01"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:adjustViewBounds="true"
        android:scaleType="centerInside"
        android:src="@drawable/splash"
        android:layout_gravity="center" />
</ScrollView>
Splash is the image that is shown in the splash screen. I have these four folders for storing drawables in my app:
/res/drawable-hdpi
/res/drawable-ldpi
/res/drawable-mdpi
/res/drawable-nodpi
The splash image has its own version in the first three of them and is displayed properly. Removing the src property from the ImageView removes the image but not the exception. I'm a little lost as to where to look for the cause of the exception; I don't even know whether the issue really is in this layout file at all. How would you go about finding the cause of this warning?

    Read the article
