Search Results

Search found 18321 results on 733 pages for 'excel library'.


  • CodePlex Daily Summary for Friday, October 18, 2013

    CodePlex Daily Summary for Friday, October 18, 2013Popular ReleasesVoya Media: Voya Media v. 1.2.255: Voya Media is free, open-source and provides one central place to play and organize all your music, pictures and videos. Voya Media | Twitter | Facebook | Google+ Features: Scans disk drives and network shares for valid media files. Generates playlist tags based on directory names. Supports MP3 tags and cover images. Supports built-in and external SRT/SUB subtitles. Gets TV Show episode details from the tvrage.com API. Gets Movie details from the themoviedb.org API. Supports Med...Aphid: Aphid 0.1.0.17 Alpha: Added several samples Fixed + operator bugpatterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Media Companion: Media Companion MC3.582b: New* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully visible. * Movie - Manual renaming checks if 'Use Fold...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesNB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store v2.3.8 Rel1: v2.3.8 Is now DNN6 and DNN7 compatible NOTE: NB_Store v2.3.8 is NOT compatible with DNN5. SOURCE CODE : https://github.com/leedavi/NB_Store (Source code has been moved to GitHub, due to issues with codeplex SVN and the inability to move easily to GIT on codeplex)Social Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. 
QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: - MSI Setup - UI Improved - CM12 Console integration - New Powershell code snippets - Client Center IntegrationSandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js awe.autoSize = false ) - added Parent, Paremeter extensions ...Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevent WPP to Reset Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer. Left-Click to open, Right-Click to copy to the Clipboard.WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. new additions: fixed the thumbnail problem for backgrounds. general clean up and error checking. need to get this put through the wringer and all feedback is welcome.BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New Features: A new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube. The Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4. This includes a large number of language enhancements, bugfixes, and project deployment support. 
Fixed Issues: Fixed Biml execution for project connections fixing a bug with Tabular Translations Editor not a...Free language translator and file converter: Free Language Translator 3.4: fixes for new version look up.New ProjectsA Program to Calculate the Sum of Two Numbers: Dennis Gaya - University of HertfordshireAllow users to change their Active Directory password from SharePoint 2013: Allow users to change their Active Directory password from SharePoint 2013 user menu contextAutoCAD Code Pack: AutoCADCodePack is a powerful library that help you to develop AutoCAD plugins using the AutoCAD .NET API.Core Engine: A game engine, written by engineers, for engineers.EmeraldForum: A simple forumFcdbas: FcdbasFreedom: ...High School Behaviour Log: A web based tool for recording student detentions, behaviour issues as well as referrals and positive behaviour. Integrated with school's SIMS.net MIS system.htmlcxx unicode version: thanks for Davi de Castro Reis and Robson Braga Ara?o's work. also thanks for Kasper Peeters 's tree.hhLets Learn DirectX 11.1 Series: Lets Learn DirectX 11.1 is a project run by Ernest Loveland to make knowledge about DirectX 11.1 development more public and freely available.Migrate CPPUnit to VS2013: migrate CPPUnit to VS2013Poll System: Poll system for educational purposePort Layer: ????????,???????R.M.B: JUST XIXIRecetas_LosBorbotones: Recetas Los BorbotonesScrolltastic-Skin: Imagine A Free DotNetNuke responsive Skin, Which Will Make Your Wishes Come True. Sharepoint and web service: This is Sharepoint 2010 project, after deploy it, you can use javascript (ajax) to access sharepoint list, call a custom web service. Everything are very easy!SourceSareProject: 00TestWebApi: WebApi serviceTFSBuildMonitor: Application that in its current shape let you monitor build queues and finished builds in an easier way than what is possible throug the built in Build ExplorerTigerProjects: This project is to create a online RSS reader similar as Google ReaderVottReader: It is project simple reader and viewer for http://vott.ru site.WinFormStudy: C#??

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might integrate BIP, they have had to make a choice about how to integrate.
    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.
    2. Web Service level - further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server, and the catalog itself, via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.
    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, not easy, nor documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.
    So, a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
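    To make the web service route (level 2 above) a little more concrete, here is a minimal C# sketch of submitting a report through a proxy generated from the BI Publisher report service WSDL. The type and member names are illustrative placeholders rather than the exact generated signatures, and the catalog path and credentials are made up.

        // Sketch only: assumes a SOAP service reference generated from the BI Publisher
        // report web service WSDL. Names below are hypothetical placeholders.
        using System.IO;

        class BipRunReportSketch
        {
            static void Main()
            {
                var client = new PublicReportServiceClient();         // assumed generated proxy
                var request = new ReportRequest
                {
                    reportAbsolutePath = "/Samples/Salary Report.xdo", // made-up catalog path
                    attributeFormat = "pdf"
                };

                // runReport is one of the two main services mentioned above.
                ReportResponse response = client.runReport(request, "bipUser", "bipPassword");

                File.WriteAllBytes("SalaryReport.pdf", response.reportBytes);
            }
        }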

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Sign In application, this application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.
    Symptoms: When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.
    Troubleshooting: Enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the Network Monitor capture revealed that the Microsoft Online Services Sign In application, when sending its request to the ISA Server, does not send the domain credentials, and as a result the ISA Server responds with an HTTP 407 Proxy Authentication Required error listing the supported authentication mechanisms. The application in question is expected to send the credentials of the domain user in response to this request. However, in this case it fails to send the logged-on user's domain credentials. A bit of research on the Internet revealed that the "Microsoft Online Services Sign In" application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials we had to make changes to its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details on configuring the configuration file are documented on the Microsoft TechNet website given below.
    Configure your outbound authenticating proxy server: http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm
    After the above problem was addressed we were still not able to use the "Microsoft Online Services Sign In" application and it failed with the same error. Analysis of another network capture revealed that the application in question is now sending the required credentials and the connection seems to terminate at a later stage. Enabled verbose logging for the "Microsoft Online Services Sign In" application and then reproduced the problem. Analysis of the logs revealed a time difference between the local client and the Microsoft Online Services server of around seven minutes, which is above the acceptable time skew of five minutes. Excerpt from the Microsoft Online Services Sign In application verbose log:
    1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn
    Although the Windows 7 clients successfully synchronized time with the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers. This caused a gradual drift in time on the network, thus resulting in the above issue. Reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.
    Configure the time source for the forest: Windows Time Service - http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx
    Forced synchronization of Windows time using the command w32tm /resync on the domain controller and later on the clients, each of which corrected the seven-minute difference. This resolved the problem with logon to the Microsoft Online Services Sign In application.
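    The log excerpt above is the key clue: the sign-in service rejects any security timestamp whose creation time lies further in the future than the allowed clock skew. A small C# illustration of that check, using the five-minute window and the roughly seven-minute drift taken from the log (this is only a sketch of the validation idea, not the actual service code):

        using System;

        class ClockSkewCheck
        {
            static void Main()
            {
                TimeSpan allowedSkew = TimeSpan.FromMinutes(5);  // '00:05:00' from the verbose log

                // Values taken from the log excerpt above (UTC).
                DateTime serverNow       = new DateTime(2012, 1, 26, 8, 27, 52, DateTimeKind.Utc);
                DateTime clientTimestamp = new DateTime(2012, 1, 26, 8, 34, 52, DateTimeKind.Utc);

                if (clientTimestamp - serverNow > allowedSkew)
                {
                    Console.WriteLine("Security timestamp rejected: creation time is in the future beyond the allowed clock skew.");
                }
            }
        }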

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications
    In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. The important point is that it doesn't matter whether you place your code at the same or at another place – it is a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines on how to reduce the risk:
    - Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. If Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    - Prefer public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time to check whether the requirement can be covered with such techniques.
    - Build a project code library; avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of “flagged files”. Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement such a team-internal process, your team members will come up with suggestions on how to improve the code quality by themselves.
    - Review heavy customizations regularly to identify options to reduce complexity; say, perform this every six months. You may not spend days on such a review, but a high-level cross check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required. Define a process for this. Add a section to the technical documentation on how to uninstall and what the possible implications are.
    - Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list!
    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team.
    Summary: Extensions and modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point where we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out. And printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0. This is the console program I used. It’s brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you. It can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing PDF files to be merged into a single file.

        using iTextSharp.text;
        using iTextSharp.text.pdf;
        using System;
        using System.IO;

        namespace MergeSpeedPASS
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                    {
                        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                        Console.WriteLine("");
                        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                        Console.WriteLine(" targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                        Console.WriteLine(" sourceDir = path to the dir containing downloaded attendee SpeedPASS PDFs");
                        Console.WriteLine("");
                        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                    }
                    else if (args.Length == 2)
                        CreateMergedPDF(args[0], args[1]);

                    Console.WriteLine("");
                    Console.WriteLine("Press any key to exit...");
                    Console.Read();
                }

                static void CreateMergedPDF(string targetPDF, string sourceDir)
                {
                    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                    {
                        Document pdfDoc = new Document(PageSize.A4);
                        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                        pdfDoc.Open();
                        var files = Directory.GetFiles(sourceDir);
                        Console.WriteLine("Merging files count: " + files.Length);
                        int i = 1;
                        foreach (string file in files)
                        {
                            Console.WriteLine(i + ". Adding: " + file);
                            pdf.AddDocument(new PdfReader(file));
                            i++;
                        }
                        if (pdfDoc != null)
                            pdfDoc.Close();
                        Console.WriteLine("SpeedPASS PDF merge complete.");
                    }
                }
            }
        }

    Hope it helps you and have fun.

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:
    - One class for GUI and internal constants (Tab Page titles, Group Box titles, calculation factors, enumerations)
    - One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
    - One class for application messages (logging, message boxes etc)
    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only defined in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc.
    However, I see certain disadvantages:
    - Every single class is tightly coupled to the constants classes.
    - Adding/Removing/Renaming/Moving a constant requires recompilation of at least 90% of the application (Note: Changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed optimized release through the Visual Studio Compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source but it might as well be.
    - You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later"-thoughts.
    - Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier.)
    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer.
    PS: I know this question is kind of subjective but I honestly don't know of any better place than this site for this kind of question.
    Update on this project: I have news on the compile time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table, column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour!
    We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reusability etc - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
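    Since the question explicitly welcomes language-independent answers, here is a small C# sketch of the idea that ended up helping in the C++ project above: splitting one monolithic constants class into focused classes so that consumers only depend on the areas they actually use. All names are made up for illustration:

        // Illustrative only: made-up names showing constants split by area instead of
        // one monolithic Constants class that every other class depends on.
        namespace App.Constants.Ui
        {
            public static class TabTitles
            {
                public const string Settings = "Settings";
                public const string Reports  = "Reports";
            }
        }

        namespace App.Constants.Messages
        {
            public static class LogMessages
            {
                public const string StartupComplete = "Application startup complete.";
            }
        }

        // A form that only shows tabs now depends on App.Constants.Ui alone;
        // changing a log message no longer forces it to recompile.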

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article.) However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today. When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field: However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article) then you do not get such a rich experience.  You would need to add something like this to the element manifest defined in your feature: <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"        ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"        Group="My Site Columns"        Name="MyNumber"        DisplayName="My Number"        Type="Numeric"        Commas="FALSE"        Decimals="0"        Required="FALSE"        ReadOnly="FALSE"        Sealed="FALSE"        Hidden="FALSE" /> OK, its not as nice as the browser UI but I can deal with this. Hang on. Commas="FALSE" and yet for my number 1234 I get 1,234.  That is not what I wanted or expected.  What gives? The answer lies in the difference between a type of "Numeric" which is an implementation of the SPFieldNumber class and "Integer" which does not correspond to a given SPField class but rather represents a positive or negative integer.  The numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers.)  So we can set the Type to Integer and we are good to go.  Yes? Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong.  Your site column is not listed in the Site Column Gallery.  The deployment must have failed then?  But no, a quick look at the site columns via the API reveals that the column is there.  What new evil is this?  Unfortunately the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE So WSS 3.0 accordingly hides your field in the gallery as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type.  Most of the time you are not going to be too concerned that your site columns are not listed in the gallery as you will know that they are there and that they are still useable. So not as bad as you thought after all.  Just a little quirky.  But that is SharePoint for you.
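    If you would rather provision such a field through the object model than through a feature, the same CAML can be fed to the API at runtime. A minimal sketch (the field XML mirrors the example above with Type="Integer"; the site URL is made up):

        // Sketch: adding the site column from code instead of a feature element manifest.
        using (SPSite site = new SPSite("http://server/sites/demo"))   // made-up URL
        using (SPWeb web = site.OpenWeb())
        {
            string fieldXml =
                "<Field ID=\"{C272E927-3748-48db-8FC0-6C7B72A6D220}\" " +
                "Group=\"My Site Columns\" Name=\"MyNumber\" DisplayName=\"My Number\" " +
                "Type=\"Integer\" Required=\"FALSE\" />";

            // With Type="Integer" the column will not show up in the Site Column Gallery,
            // but it can still be referenced from content types as described above.
            web.Fields.AddFieldAsXml(fieldXml);
        }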

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free PHP libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my VirtualBox (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the command line like this:

        libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell). I was at a loss as to how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the Linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice that CDE generated. So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a Linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that.

        <?php
        // first copy the file to the magic place where we can convert it to a pdf on the fly
        copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);

        // change to that directory
        chdir('../LibreOffice/cde-package/cde-root/home/robert');

        // the magic command that does the conversion
        $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/";
        exec($myCommand);

        // copy the file back
        copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));

        // delete all the files out of the magic place where we can convert it to a pdf on the fly
        $files1 = scandir('Desktop');

        // my files that I generated all happened to start with a number.
        $pattern = '/^[0-9]/';
        foreach ($files1 as $value) {
            preg_match($pattern, $value, $matches);
            if (count($matches) > 0) {
                unlink("Desktop/".$value);
            }
        }

        // changing the header to the location of the file makes it work well on androids
        header('Location: '.str_replace(".docx", ".pdf", $_POST["filename"]));
        ?>

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!
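    For anyone doing the same thing from a .NET web application instead of PHP, the core idea (shell out to a headless LibreOffice and wait for it to finish) looks roughly like the C# sketch below. It reuses the same flags as the command above; the executable name and paths are assumptions that depend on the actual install:

        using System.Diagnostics;

        class DocxToPdf
        {
            static void Convert(string inputFile, string outputDir)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "libreoffice",   // assumed to be on the PATH
                    Arguments = "--headless -convert-to pdf \"" + inputFile + "\" -outdir \"" + outputDir + "\"",
                    UseShellExecute = false,
                    CreateNoWindow = true
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();   // block until the conversion finishes
                }
            }
        }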

    Read the article

  • CodePlex Daily Summary for Wednesday, August 20, 2014

    CodePlex Daily Summary for Wednesday, August 20, 2014Popular ReleasesMolGridCal & MolCal: MolGridCal tutorial v1.1: Update the contents for grid computing virtual screening.SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements Bug fixes Stores auth method and user name Moved experimental settings to Advanced boxCtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.SQL Scripts to Improve DNN Performance: Turbo Scripts 0.9.1 for DNN 7.1.2 - 7.3.2: Support for DNN 7.3.2 added Fixed a typo in GetVendor procedure New goodies TurboDbConfig and TurboDNNSettings added (re-uploaded with fixed file name)The IRIS Toolbox Project: IRIS Toolbox Release 20140819: Regular Updates Issue 66 (66) closed, bug fixed. Calls to irisstartup and iriscleanup occasionally crashed when older IRIS versions were on the search path. Issue 65 (65) closed, bug fixed. The function tseries/convert crashed when called with 'method=' 'first' or 'last'.ConEmu - Windows console with tabs: ConEmu 140819 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe" HDD Guardian: HDD Guardian 0.6.1: New: package now include smartctl 6.3; Removed: standard notification e-mail. Now you have to set your mail server to send e-mail alerts; Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksLinq 4 Javascript: Version 2.4: Minor Changes Made Added Count() and Count(with where clause) Distinct will now use a dictionary instead of a custom dictionary object Organize the unit tests. The variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. 
Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...Fluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...DNN CMS Platform: 07.03.02: Major Highlights Fixed backwards compatibility issue with 3rd party control panels Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari Fixed issue where users were able to create pages with the same name Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade Fixed issue that stopped some skins from being upgraded to newer versions Fixed issue that randomly showed an unexpected error during us...WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel works) - Units are not supported yet (Coming up) The Mac version is yet as tested as the windows version.New ProjectsCryptography Enumerations JavaScript Shell: Javascript shell for CAPICOM enumsFamiglia Information Server: Famiglia ProjectHustle: Hustle is a small utility that can be used to manage threads when executing command line tools. 
The tool was built to take advantage parallel loading in TM1.LuckyDraw: an lucky drawer for 2nd Sudoku Competition Powered by PythonMachina Aurum WebDriver Powershell: Powershell commands to the http://www.w3.org/TR/webdriver/ specification (works on Internet Explorer 11 and Chrome)ODBC Connect: ODBC Connect is a Windows user interface for 32bit and 64bit ODBC data sources. Generate SQL statements, export to file, run from the command line to automate.Post Publisher: Post Publisher is a post publishing platform. This is a work-in-progress.Samurai Framework: Samurai is a game framework written in C# using OpenGL for graphics. It aims to be a higher level abstraction rather than just a simple OpenGL binding.SharePoint 2013 tokenizing user/group autocomplete: SharePoint 2013 tokenizing user/group autocomplete plugin allow user to search/select users/groups.SharePoint 2013 User Lync Presence using jQuery: SharePoint 2013 User Lync Presence jQuery plugin renders the presence for particular user with or without picture.SharePoint Search Box User/Group AutoComplete: SharePoint Search box User/Group autocomplete is a jQuery plugin to quick search the user/group from OOB search input control.TI Helper: TI Helper is an Auto Hot Key script that can be used to enhance the options in the IBM Cognos TM1 Turbo Integrator (TI) process editor.trade-software: for trading educationWPF Graphing: CUSTOME GRAPHING SOLUTIONYoutubeChannelDownloader: I love to see video on my favorite channel,and I want to download videos on this channel to watch later. So I made this app.

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim at keeping this blog post updated and adding related posts to it. Since there are a lot of these out there, I link to others that have done kind of the same before me - kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stacktrace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people that find this blog can see multiple solutions for indexed stacktraces and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.
    The steps:
    - Setup project in VS2012. (msdn blog)
    - Setup Azure Services (half of mpspartners.com blog)
    - Setup connection strings and configuration files (msdn blog + notes)
    - Export certificates.
    - Create Azure package from VS2012 and deploy to staging (same steps as for production).
    - Connection string error
    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx
    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/
    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx Trying to package our application at this step will generate the following warning: 3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string. Right click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management platform. [Image 3.1]
    4. Right click Remote Desktop Configuration, select the certificate and export it to a file. We need to allow it in the portal manager. [Image 4.1]
    5. Now right click the cloud project and select Package. [Image 5.1: Showing dialogue box] [Image 5.2: Package success] Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. [Image 5.3] Tick the box about the single instance if that's what you want or you don't know what it means. Otherwise the following will happen (see image 4.6). [Image 5.4: Dialogue box] When you have clicked the accept button you will see the following screen with some green indicators down at the right corner. Click them if you want to see status. [Image 5.5: Information screen] [Image 5.6] "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go to step 5.4. If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note: the following thread suggests, to prevent this, choosing "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the not-so-present certificates. I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd [Image 5.7] [Image 5.8] Success! [Image 5.9] Nice URL n' all. (More on that at another blog post.)
    6. If you try to login and get an error: when this error occurs, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run as suggested in the ordinary project in your solution. Go to the management portal and click update.
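    For reference, once the storage account credentials are set on the role (step 3 above), role code usually reads connection strings back by setting name instead of hard-coding them. A rough sketch using the Azure SDK of that era; "StorageConnectionString" is only an example setting name, not something defined in this walkthrough:

        using Microsoft.WindowsAzure;                // CloudConfigurationManager
        using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount

        class StorageConfigExample
        {
            static void Main()
            {
                // Reads the value from ServiceConfiguration.Cloud.cscfg when running in Azure,
                // falling back to app/web.config appSettings when running locally.
                string connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");
                CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            }
        }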

    Read the article

  • Running Powershell from within SharePoint

    - by Norgean
    Just because something is a daft idea, doesn't mean it can't be done. We sometimes need to do some housekeeping - like delete old files or list items or… yes, well, whatever you use Powershell for in a SharePoint world. Or it could be that your solution has "issues" for which you have Powershell solutions, but not the budget to transform into proper bug fixes. So you create a "how to" for the ITPro guys. Idea: What if we keep the scripts in a list, and have SharePoint execute the scripts on demand? An announcements list (because of the multiline body field). Warning! Let us be clear. This list needs to be locked down; if somebody creates a malicious script and you run it, I cannot help you.
    First, we need to figure out how to start Powershell scripts from C#. Hit teh interwebs and the Googlie, and you may find jpmik's post: http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C. (Or MS' official answer at http://msdn.microsoft.com/en-us/library/ee706563(v=vs.85).aspx)

        public string RunPowershell(string powershellText, SPWeb web, string param1, string param2)
        {
            // Powershell ~= RunspaceFactory - i.e. Create a powershell context
            var runspace = RunspaceFactory.CreateRunspace();
            var resultString = new StringBuilder();
            try
            {
                // load the SharePoint snapin - Note: you cannot do this in the script itself
                // (i.e. add-pssnapin etc does not work)
                PSSnapInException snapInError;
                runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
                runspace.Open();
                // set a web variable.
                runspace.SessionStateProxy.SetVariable("webContext", web);
                // and some user defined parameters
                runspace.SessionStateProxy.SetVariable("param1", param1);
                runspace.SessionStateProxy.SetVariable("param2", param2);
                var pipeline = runspace.CreatePipeline();
                pipeline.Commands.AddScript(powershellText);
                // add a "return" variable
                pipeline.Commands.Add("Out-String");
                // execute!
                var results = pipeline.Invoke();
                // convert the script result into a single string
                foreach (PSObject obj in results)
                {
                    resultString.AppendLine(obj.ToString());
                }
            }
            finally
            {
                // close the runspace
                runspace.Close();
            }
            // consider logging the result. Or something.
            return resultString.ToString();
        }

    Ok. We've written some code. Let us test it.

        var runner = new PowershellRunner();
        runner.RunPowershell(@"
            $web = Get-SPWeb 'http://server/web' # or $webContext
            $web.Title = $param1
            $web.Update()
            $web.Dispose()
            ", null, "New title", "not used");

    Next step: Connect the code to the list, or more specifically, have the code execute on one (or several) list items. As there are more options than readers, I'll leave this as an exercise for the reader. Some alternatives:
    - Create a ribbon button that calls RunPowershell with the body of the selected items
    - Add a layout page
    - Specify list item from query string (possibly coupled with a content editor webpart with html that links directly to this page with querystring)
    - Webpart
    - Listing with an "execute" column
    - List with multiselect and an execute button
    - Etc!
    Now that you have the code for executing powershell scripts, you can easily expand this into a timer job, which executes scripts at regular intervals. But if the previous solution was dangerous, this is even worse - the scripts will usually be run with one of the admin accounts, and can do pretty much anything...
    One more thing... Note that as this is running "consoleless", calls to Write-Host will fail. Two solutions: remove all output, or check if the script is run in a console window or not.

        if ($host.Name -eq "ConsoleHost") { Write-Host "If I agreed with you we'd both be wrong" }
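    As a rough illustration of the "scripts in a list" idea left as an exercise above, the glue code could look something like the sketch below. The list name, the use of the Body field and the PowershellRunner wrapper class are assumptions for illustration, not part of the original post:

        // Sketch only: assumes a PowershellRunner class exposing the RunPowershell method
        // shown above, and an announcements list named "Scripts" whose Body field holds the script text.
        using (SPSite site = new SPSite("http://server/web"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList scriptList = web.Lists["Scripts"];       // assumed list name
            var runner = new PowershellRunner();

            foreach (SPListItem item in scriptList.Items)
            {
                string scriptBody = item["Body"] as string; // the multiline body field
                if (!string.IsNullOrEmpty(scriptBody))
                {
                    string result = runner.RunPowershell(scriptBody, web, "param1", "param2");
                    // consider logging 'result' somewhere useful
                }
            }
        }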

    Read the article

  • eSTEP Newsletter November 2012

    - by mseika
    Dear Partners,We would like to inform you that the November '12 issue of our Newsletter is now available.The issue contains information to the following topics: News from CorpOracle Celebrates 25 Years of SPARC Innovation; IDC White Papers Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement Oracle Solaris 11.1 Availability (data sheet, new features, FAQ's, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement Oracle Solaris Cluster 4.1 Availability (new features, FAQ's, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available Technical SectionOracle White papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground with additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Ops Center Manager 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Posts New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle SUN ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600 with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving down the high cost of Storage, This Provisioning with Pilar Axiom 600, Pillar Axiom 600- System overview and architecture; Migrate to Oracle;s SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems Learning & EventsRecently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine ReferencesARTstor Selects Oracle’s Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library, Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year, Oracle's CRM Cloud Service Powers Innovation: Applications on Demand; Technology on Demand, How toHow to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to prepare a Sun ZFS Storage Appliance to Serve as a Storage Devise with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System In Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System, How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud HandbookYou find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the pin below to get access. 
Link to the portal is shown below.URL: http://launch.oracle.com/PIN: eSTEP_2011Previous published Newsletters can be found under the Archived Newsletters section and more useful information under the Events, Download and Links tab. Feel free to explore and any feedback is appreciated to help us improve the service and information we deliver.Thanks and best regards,Partner HW Enablement EMEA

    Read the article

  • AABB vs OBB Collision Resolution jitter on corners

    - by patt4179
    I've implemented a collision library for a character who is an AABB and am resolving collisions between AABB vs AABB and AABB vs OBB. I wanted slopes for certain sections, so I've toyed around with using several OBBs to make one, and it's working great except for one glaring issue; The collision resolution on the corner of an OBB makes the player's AABB jitter up and down constantly. I've tried a few things I've thought of, but I just can't wrap my head around what's going on exactly. Here's a video of what's happening as well as my code: Here's the function to get the collision resolution (I'm likely not doing this the right way, so this may be where the issue lies): public Vector2 GetCollisionResolveAmount(RectangleCollisionObject resolvedObject, OrientedRectangleCollisionObject b) { Vector2 overlap = Vector2.Zero; LineSegment edge = GetOrientedRectangleEdge(b, 0); if (!SeparatingAxisForRectangle(edge, resolvedObject)) { LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment(); Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range(); Vector2 n = edge.PointA - edge.PointB; rEdgeA.PointA = RectangleCorner(resolvedObject, 0); rEdgeA.PointB = RectangleCorner(resolvedObject, 1); rEdgeB.PointA = RectangleCorner(resolvedObject, 2); rEdgeB.PointB = RectangleCorner(resolvedObject, 3); rEdgeARange = ProjectLineSegment(rEdgeA, n); rEdgeBRange = ProjectLineSegment(rEdgeB, n); rProjection = GetRangeHull(rEdgeARange, rEdgeBRange); axisRange = ProjectLineSegment(edge, n); float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2; float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2; if (projectionMid > axisMid) { overlap.X = axisRange.Maximum - rProjection.Minimum; } else { overlap.X = rProjection.Maximum - axisRange.Minimum; overlap.X = -overlap.X; } } edge = GetOrientedRectangleEdge(b, 1); if (!SeparatingAxisForRectangle(edge, resolvedObject)) { LineSegment rEdgeA = new LineSegment(), rEdgeB = new LineSegment(); Range axisRange = new Range(), rEdgeARange = new Range(), rEdgeBRange = new Range(), rProjection = new Range(); Vector2 n = edge.PointA - edge.PointB; rEdgeA.PointA = RectangleCorner(resolvedObject, 0); rEdgeA.PointB = RectangleCorner(resolvedObject, 1); rEdgeB.PointA = RectangleCorner(resolvedObject, 2); rEdgeB.PointB = RectangleCorner(resolvedObject, 3); rEdgeARange = ProjectLineSegment(rEdgeA, n); rEdgeBRange = ProjectLineSegment(rEdgeB, n); rProjection = GetRangeHull(rEdgeARange, rEdgeBRange); axisRange = ProjectLineSegment(edge, n); float axisMid = (axisRange.Maximum + axisRange.Minimum) / 2; float projectionMid = (rProjection.Maximum + rProjection.Minimum) / 2; if (projectionMid > axisMid) { overlap.Y = axisRange.Maximum - rProjection.Minimum; overlap.Y = -overlap.Y; } else { overlap.Y = rProjection.Maximum - axisRange.Minimum; } } return overlap; } And here is what I'm doing to resolve it right now: if (collisionDetection.OrientedRectangleAndRectangleCollide(obb, player.PlayerCollision)) { var resolveAmount = collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb); if (Math.Abs(resolveAmount.Y) < Math.Abs(resolveAmount.X)) { var roundedAmount = (float)Math.Floor(resolveAmount.Y); player.PlayerCollision._position.Y -= roundedAmount; } else if (Math.Abs(resolveAmount.Y) <= 30.0f) //Catch cases where the player should be able to step over the top of something { var roundedAmount = (float)Math.Floor(resolveAmount.Y); player.PlayerCollision._position.Y -= roundedAmount; } else { var 
roundedAmount = (float)Math.Floor(resolveAmount.X); player.PlayerCollision._position.X -= roundedAmount; } } Can anyone see what might be the issue here, or has anyone experienced this before and knows a possible solution? I've tried for a few days to figure this out on my own, but I'm just stumped.
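    A common source of this kind of jitter is that the correction is floored and applied in full every frame, so the AABB bounces between the two faces that meet at the corner. The sketch below only illustrates one frequently used mitigation (a small dead zone plus resolving along the axis of least penetration); the helper name, the 0.5f epsilon, and the reuse of the question's variable names in the usage comment are assumptions, not part of the original code.

        using System;
        using Microsoft.Xna.Framework;   // for Vector2 (XNA/MonoGame)

        static class CollisionResolveHelper
        {
            // Clamp tiny corrections to zero so alternating X/Y resolutions at an OBB
            // corner cannot re-trigger each other on consecutive frames.
            public static Vector2 ApplyDeadZone(Vector2 resolveAmount, float epsilon)
            {
                if (Math.Abs(resolveAmount.X) < epsilon) resolveAmount.X = 0f;
                if (Math.Abs(resolveAmount.Y) < epsilon) resolveAmount.Y = 0f;
                return resolveAmount;
            }
        }

        // Usage, reusing the question's names (assumed): resolve along the axis of least
        // penetration and apply the measured amount directly instead of flooring it.
        //
        //   var amount = CollisionResolveHelper.ApplyDeadZone(
        //       collisionDetection.GetCollisionResolveAmount(player.PlayerCollision, obb), 0.5f);
        //   if (Math.Abs(amount.Y) <= Math.Abs(amount.X))
        //       player.PlayerCollision._position.Y -= amount.Y;
        //   else
        //       player.PlayerCollision._position.X -= amount.X;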

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question, feel free to skip down for the specific question Hello, I've been very interested in the idea of abstract programming the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. I've undertook some attempts at this that have taken upwards of a year, while getting close, never releasing it beyond my compiler. This has been something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes of course you noob! You can't account for everything!" To which I have to reply, "Why not?" To give you some background into what I'm talking about, this all started with doing maybe a shade of gray hat SEO software. I found myself constantly having to create similar, but slightly different sets of code. I've gone through as many iterations of way to communicate on http as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library, and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CRTL+A keyboard shortcut, mixed with the delete button. It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code. as time went on, and I could subversion stuff out, and distribute small chunks of code, rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new http method, or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to a quite a few questions regarding its execution. I have to have some parameters to what I am going to do. After writing what the perfect software would do in my mind, I came up with this as a list of requirements: Should be able to use networking A "Macro" or "Expression system" which would allow people to do something like : =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google)) Multithreaded Able to add UI elements through some type of XML -- People can make their own addons etc. Can use third party API through the plugins, such as Microsoft CRM, Exchange, etc. This would allow the software to essentially be used for everything. Really, any task you wish to automate, in a simple way. Making the UI was as also extremely hard. How do you do all of this? Its very difficult. So my question: With so many attempts at this, I'm out of ideas how to successfully complete this. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self taught programmer. I've been doing it for years, and work professionally in it, but I've never encountered something that would be as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software. What can I do to generalize this and draw everything out to completion. These are things I struggle with. P.s., I'm using c# as my main language. 
I feel like in this example I might be hitting the outer limits of the language, although I don't know whether that is the case or whether I'm just a bad programmer. Thanks for your time.
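    For the plugin-system part of the question, a tiny reflection-based contract is a common starting point. The sketch below is purely illustrative: the interface name, folder convention, and Execute signature are invented here, and in practice MEF (System.ComponentModel.Composition, in the box since .NET 4) or an IoC container would usually replace the hand-rolled loader.

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // Hypothetical contract: each addon ships as a DLL that implements this interface.
        public interface ITaskPlugin
        {
            string Name { get; }
            object Execute(object input);
        }

        public static class PluginLoader
        {
            // Scans a folder for DLLs and instantiates every concrete ITaskPlugin found.
            public static ITaskPlugin[] LoadFrom(string folder)
            {
                return Directory.GetFiles(folder, "*.dll")
                    .Select(Assembly.LoadFrom)
                    .SelectMany(assembly => assembly.GetTypes())
                    .Where(t => typeof(ITaskPlugin).IsAssignableFrom(t) && !t.IsAbstract)
                    .Select(t => (ITaskPlugin)Activator.CreateInstance(t))
                    .ToArray();
            }
        }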

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With ‘document.write’

    - by ToStringTheory
    Introduction Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn’t fit the model and usage defined by the API or documentation of the plugin?  Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site.  To use their plugin, you had to include a script tag to the plugin on their server with an API key.  The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear. The Problem When trying to use the service, the images did not display.  I checked a couple of things and didn’t find anything wrong at first..  It wasn’t until I looked at the function that was called by the inline script did I find the issue – a call to the webservice, followed by a call to ‘document.write’ in its callback.  The solution in which I was trying to implement the plugin happened to be in response to an AJAX call after the document had completely loaded.  After the page has loaded, document.write does nothing. My first thought for a solution was to just cache the script from the service, and edit it do something like a return function or callback that I could use to edit the document from.  However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server.  The hash was updated every few seconds/minutes, expiring old hashes.  This meant that I wouldn’t be able to edit the script and upload a new version to my server, as the script would not work after a few minutes from originally getting the script from the service. Solution The solution eluded me until I realized that this was JavaScript I was dealing with.  A language designed so that you could do just about anything to any library, function, or object…  At this point, the solution was simple – take control of the document.write function.  Using a buffer variable, and a simple function call, it is eerily simple to perform: //what would have been output to the document var buffer = ""; //store a reference to the real document.write var dw = document.write; //redefine document.write to store to our buffer document.write = function (str) {buffer += str;} //execute the function containing calls to document.write eval('{function encapsulated in <script></script> tags}'); //restore the original document.write function (just in case) document.write = dw; That’s it.  Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL to the page I wanted a snapshot of.  After that last line of code, what would have been output to the document (or not in the case of the ajax call) was instead stored in buffer. Conclusion While the solution itself is simple, coming from a background much more footed in the .Net platform, I believe that this is a prime example of always keeping the language that you are working in in mind.  While this may seem obvious at first, as I KNEW I was in JavaScript, I never thought of taking control of the document.write function because I am more accustomed to the .Net world.  I can’t simply replace the functionality of Console.WriteLine.

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions. ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll(). ClaimsPrincipal has those methods as well. But they work across all contained identities. Nice! ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting necessary anymore. SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT. A new session security token handler that uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios. No need for a custom service host factory or the federation behavior anymore. WCF can be switched into “WIF mode” with the useIdentityConfiguration switch (odd name though). Tooling has become better and the new test STS makes it very easy to get started. On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered. Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well. No IClaimsPrincipal and IClaimsIdentity anymore. Configuration section has been split into <system.identityModel /> and <system.identityModel.services />. WCF configuration story has changed as well. Claim.ClaimType is now Claim.Type. ClaimCollection is now IEnumerable<Claim>. IsSessionMode is now IsReferenceMode. Bootstrap token handling is different now. ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here). Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()). SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>. Some lower level helper classes are gone or internal now (e.g. KeyGenerator). The WCF WS-Trust bindings are gone. I think this is a pity. They were *really* useful when doing work with WSTrustChannelFactory. Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.
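    A short usage sketch of the claims-querying methods mentioned above (System.Security.Claims in .NET 4.5); the claim types and the "Admin" role value are sample data, not taken from the post.

        using System;
        using System.Security.Claims;

        class ClaimsDemo
        {
            static void Main()
            {
                // ClaimsPrincipal.Current wraps Thread.CurrentPrincipal; no cast required.
                ClaimsPrincipal principal = ClaimsPrincipal.Current;

                // HasClaim/FindFirst/FindAll work across all identities in the principal.
                bool isAdmin = principal.HasClaim(ClaimTypes.Role, "Admin");

                Claim email = principal.FindFirst(ClaimTypes.Email);
                Console.WriteLine(email != null ? email.Value : "(no email claim)");

                foreach (Claim role in principal.FindAll(ClaimTypes.Role))
                {
                    Console.WriteLine(role.Type + " = " + role.Value);
                }

                Console.WriteLine(isAdmin);
            }
        }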

    Read the article

  • Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

    - by TheCodeJunkie
    I'm playing around a bit with image processing and decided to read up on how color quantization works, and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in the Leptonica library and came across something I thought was a bit odd. Now I want to stress that I am far from an expert in this area, nor am I a math-head, so I am predicting that this all comes down to me not understanding all of it and not that the implementation of the algorithm is wrong at all. The algorithm states that the vbox should be split along the largest axis and that it should be split using the following logic: The largest axis is divided by locating the bin with the median pixel (by population), selecting the longer side, and dividing in the center of that side. We could have simply put the bin with the median pixel in the shorter side, but in the early stages of subdivision, this tends to put low density clusters (that are not considered in the subdivision) in the same vbox as part of a high density cluster that will outvote it in median vbox color, even with future median-based subdivisions. The algorithm used here is particularly important in early subdivisions, and is useful for giving visible but low population color clusters their own vbox. This has little effect on the subdivision of high density clusters, which ultimately will have roughly equal population in their vboxes. For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following: Iterate over all the possible green and blue variations of the red color For each iteration it adds to the total number of pixels (population) it's found along the red axis For each red color it sums up the population of the current red and the previous ones, thus storing an accumulated value, for each red note: when I say 'red' I mean each point along the axis that is covered by the iteration, the actual color may not be red but contains a certain amount of red So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations 4 8 20 16 1 9 12 8 8 After the iteration of all red bins, the partialsum array will contain the following count for the bins mentioned above 4 12 32 48 49 58 70 78 86 And total would have a value of 86 Once that's done it's time to perform the actual median cut and for the red axis this is performed on line 01346 It iterates over the bins and checks their accumulated sum. And here's the part that throws me off from the description of the algorithm. It looks for the first bin that has a value that is greater than total/2 Wouldn't total/2 mean that it is looking for a bin that has a value that is greater than the average value and not the median? The median for the above bins would be 49. The use of 43 or 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was. Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but does not mention how to proceed if there are an even number of bins: the median would be the result of (a+b)/2 and it's not guaranteed that any of the bins contains that population count.
So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if it got a bit long-winded, but I wanted to be as thorough as I could because it's been driving me nuts for a couple of days now ;)
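    To make the worked numbers easier to follow, here is a small recomputation of the cumulative sums and the total/2 test the questioner describes, using the populations given in the question (written in C# rather than the library's C, purely for illustration):

        using System;

        class MedianCutCheck
        {
            static void Main()
            {
                // Bin populations along the red axis, exactly as given in the question.
                int[] population = { 4, 8, 20, 16, 1, 9, 12, 8, 8 };

                int total = 0;
                foreach (int p in population) total += p;          // 86
                int half = total / 2;                              // 43

                int cumulative = 0;
                for (int bin = 0; bin < population.Length; bin++)
                {
                    cumulative += population[bin];                 // 4, 12, 32, 48, 49, ...
                    if (cumulative > half)
                    {
                        // Prints: first bin past total/2 is bin 3 (cumulative 48)
                        Console.WriteLine("first bin past total/2 is bin " + bin +
                                          " (cumulative " + cumulative + ")");
                        break;
                    }
                }
            }
        }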

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade.  It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies.  Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML.  This usually starts with a “quick and dirty” approach because you need an XML document and looks like (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string): return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);   In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead.  Work’s done, right?  No it’s not.  You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values.  If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;. Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers.  This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same.  This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of.  The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut.  Sooner or later you end up writing your own somewhat functional XML engine. I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine.  My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine?  Easy – use one of the ones provided to you for free!  If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).  
For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:             using (StringWriter sw = new StringWriter())             {                 using (XmlTextWriter writer = new XmlTextWriter(sw))                 {                     writer.WriteStartDocument();                     writer.WriteStartElement("Contact");                     writer.WriteElementString("BusinessName", contact.BusinessName);                     writer.WriteEndElement(); // end Contact element                     writer.WriteEndDocument();                     writer.Flush();                     return sw.ToString();                 }             }   Looking at that code, it’s easy to understand why people are drawn to the initial one-liner.  Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object.  This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows: return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();   While it is very common for people to use string manipulation to create XML, I’ve discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative.  I’ve given a very simplistic example here to highlight the most basic XML generation task.  For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
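    As a quick follow-on to the "Sanford & Son" point above, this minimal console program (not from the original post) shows the XElement version emitting the properly encoded ampersand:

        using System;
        using System.Xml.Linq;

        class XmlEscapingDemo
        {
            static void Main()
            {
                var contact = new { BusinessName = "Sanford & Son" };

                // XElement encodes the ampersand; the string.Format approach would not.
                string xml = new XElement("Contact",
                    new XElement("BusinessName", contact.BusinessName)).ToString();

                Console.WriteLine(xml);
                // <Contact>
                //   <BusinessName>Sanford &amp; Son</BusinessName>
                // </Contact>
            }
        }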

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment. As a simple example, the poll() system call takes a timeout argument in units of msec: System Calls poll(2) NAME poll - input/output multiplexing SYNOPSIS int poll(struct pollfd fds[], nfds_t nfds, int timeout); In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec. In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list: nanosleep pthread_mutex_timedlock pthread_mutex_reltimedlock_np pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np mq_timedreceive mq_reltimedreceive_np mq_timedsend mq_reltimedsend_np sem_timedwait sem_reltimedwait_np poll select pselect _lwp_cond_timedwait _lwp_cond_reltimedwait semtimedop sigtimedwait aiowait aio_waitn aio_suspend port_get port_getn cond_timedwait cond_reltimedwait setitimer (ITIMER_REAL) misc rpc calls, misc ldap calls This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread. Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules. A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. 
They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick. Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.

    Read the article

  • CodePlex Daily Summary for Tuesday, August 19, 2014

    CodePlex Daily Summary for Tuesday, August 19, 2014Popular ReleasesVG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksLinq 4 Javascript: Version 2.4: Minor Changes Made Added Count() and Count(with where clause) Distinct will now use a dictionary instead of a custom dictionary object Organize the unit tests. The variable names will actually make sense and won't be 2 letters. SelectMany will now use the queryable logic.Magick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...OpenCppCoverage: OpenCppCoverage 0.9.1: - Add Jenkins support. - Command line argument can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784Easy Backup Windows Service: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.Easy Backup Application: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.TEBookConverter: 1.5: Added: Turkish and French translations Added: A few interface changes Removed: SkinDynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixingFluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. 
Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...DNN CMS Platform: 07.03.02: Major Highlights Fixed backwards compatibility issue with 3rd party control panels Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari Fixed issue where users were able to create pages with the same name Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade Fixed issue that stopped some skins from being upgraded to newer versions Fixed issue that randomly showed an unexpected error during us...WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel works) - Units are not supported yet (Coming up) The Mac version is yet as tested as the windows version.MFCMAPI: August 2014 Release: Build: 15.0.0.1042 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeNew Projectsangle.works: Angle.works sample solutionAutomatically Update CMS from SCOM: Microsoft Central Management Server (CMS) was released with SQL Server 2008; however keeping the server list up-to-date is challenging.BLE for NETMF: Class library targeted to .NET Micro Framework for BLE support (both central and peripheral profiles).CIIS: ciisClipboard Calculator: Simple tool to calculate numbers from the clipboard. Useful for SQL Server Management Studio User.CYCLOPS: CYCLOPS is a project that seeks to aid the development and testing of ANPR software. CYCLOPS provides a virtual camera and virtual frame grabber for image procedemo4: demo4eAsset: Web based Fixed Asset Management SystemEFML-Runtime: Create applications with xml,css,js feeadmin: Complete fee managementFinal Fantasy VI Randomizer: This is a simple program that randomizes skills, items, and characters in Final Fantasy VI. 
It is used for racing the game.Object Editor: Object Editor is a visual editor which allows to edit any properties of any class and to simplify the changing and displaying of a class instance.pbb: Pinoy Badminton Buddies Web ApplicationRubyFeed: RubyFeed is a stand-alone HTML5 page for Internet Explorer on Windows 8 and above that makes it easy to view and manage your Windows RSS Platform feeds.

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluates to false and any other [integer] value to true is most of programming languages? String comparison Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings three-way comparison, I will take C's strcmp as example: any programmer trying C as his first language may be tempted to write the following code: if (strcmp(str1, str2)) { // Do something... } Since strcmp returns 0 which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its most simple expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time. Moreover, let's introduce a new type, sign, that just takes values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but spaceship operator is more fun). The declaration would currently be the following one: sign operator<=>(const std::string& lhs, const std::string& rhs); Had 0 been evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way: sign operator==(const std::string& lhs, const std::string& rhs); This operator== would have handled three-way comparison at once, and could still be used to perform the following check while still being able to check which string is lexicographically superior to the other when needed: if (str1 == str2) { // Do something... } Old errors handling We now have exceptions, so this part only applies to the old languages where no such thing exist (C for example). If we look at C's standard library (and POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any integer otherwise. I have sadly seen some people do this kind of things: #define TRUE 0 // ... if (some_function() == TRUE) { // Here, TRUE would mean success... // Do something } If we think about how we think in programming, we often have the following reasoning pattern: Do something Did it work? Yes -> That's ok, one case to handle No -> Why? Many cases to handle If we think about it again, it would have made sense to put the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no. However, in all the programming languages I know (except maybe some experimental esotheric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense. 
Conclusion My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of? Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)

    Read the article

  • Nginx1.6.1: Installing from source not running corectly

    - by Maca
    I just installed Nginx 1.6.1 from source but it didn't seem to installed correctly. Nginx is running if I service nginx status but when I do nginx -v it outputs command not found. Regular HTML page shows fine and there is no error in the error logs. I am on AWS Ec2 linux AMI. Here is my /etc/init.d/nginx script #!/bin/sh # # nginx - this script starts and stops the nginx daemon # # chkconfig: - 85 15 # description: Nginx is an HTTP(S) server, HTTP(S) reverse \ # proxy and IMAP/POP3 proxy server # processname: nginx # config: /etc/nginx/nginx.conf # config: /etc/sysconfig/nginx # pidfile: /usr/local/nginx/logs/nginx.pid # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ "$NETWORKING" = "no" ] && exit 0 nginx="/usr/local/nginx/sbin/nginx" prog=$(basename $nginx) NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf" [ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx lockfile=/usr/local/nginx/logs/nginx.lock make_dirs() { # make required directories user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -` options=`$nginx -V 2>&1 | grep 'configure arguments:'` for opt in $options; do if [ `echo $opt | grep '.*-temp-path'` ]; then value=`echo $opt | cut -d "=" -f 2` if [ ! -d "$value" ]; then # echo "creating" $value mkdir -p $value && chown -R $user $value fi fi done } start() { [ -x $nginx ] || exit 5 [ -f $NGINX_CONF_FILE ] || exit 6 make_dirs echo -n $"Starting $prog: " daemon $nginx -c $NGINX_CONF_FILE retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc $prog -QUIT retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { configtest || return $? stop sleep 1 start } reload() { configtest || return $? echo -n $"Reloading $prog: " killproc $nginx -HUP RETVAL=$? echo } force_reload() { restart } configtest() { $nginx -t -c $NGINX_CONF_FILE } rh_status() { status $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart|configtest) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}" exit 2 esac

    Read the article

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guy, I would like to update my php 5 in my server. At this moment, I use php 5.2.0 so I want to update it to php 5.2.4 (not php 5.3). I tried to do this: aptitude update aptitude upgrade 63 packets were updated but not php which is always in 5.0 How can I update my php please ? Here is the output of commands asked by David in another post: aptitude search php5 p libapache-mod-php5 - server-side, HTML-embedded scripting langu i A libapache2-mod-php5 - server-side, HTML-embedded scripting langu i php5 - server-side, HTML-embedded scripting langu p php5-apache2-mod-bt - PHP bindings for mod_bt p php5-auth-pam - A PHP5 extension for PAM authentication i php5-cgi - server-side, HTML-embedded scripting langu p php5-clamavlib - PHP ClamAV Lib - ClamAV Interface for PHP5 p php5-cli - command-line interpreter for the php5 scri i A php5-common - Common files for packages built from the p i php5-curl - CURL module for php5 p php5-dev - Files for PHP5 module development i A php5-gd - GD module for php5 p php5-idn - PHP api for the IDNA library p php5-imagick - ImageMagick module for php5 p php5-imap - IMAP module for php5 p php5-interbase - interbase/firebird module for php5 p php5-json - JSON serialiser for PHP5 p php5-ldap - LDAP module for php5 p php5-mapscript - module for php5-cgi to use mapserver p php5-maxdb - PHP extension to access MaxDB databases fo i A php5-mcrypt - MCrypt module for php5 p php5-memcache - memcache extension module for PHP5 p php5-mhash - MHASH module for php5 p php5-ming - Ming module for php5 i A php5-mysql - MySQL module for php5 p php5-odbc - ODBC module for php5 p php5-pgsql - PostgreSQL module for php5 p php5-ps - ps module for PHP 5 p php5-pspell - pspell module for php5 p php5-radius - PECL radius module for PHP 5 p php5-recode - recode module for php5 p php5-snmp - SNMP module for php5 p php5-sqlite - SQLite module for php5 p php5-sqlite3 - SQLite3 module for php5 p php5-sqlrelay - SQL Relay PHP API p php5-suhosin - advanced protection module for php5 p php5-sybase - Sybase / MS SQL Server module for php5 p php5-tidy - tidy module for php5 p php5-uuid - OSSP uuid module for php5 p php5-xapian - Xapian search engine interface for PHP5 p php5-xcache - Fast, stable PHP opcode cacher p php5-xmlrpc - XML-RPC module for php5 p php5-xsl - XSL module for php5 aptitude show php5 | grep Version Version : 5.2.0-8+etch13 aptitude show php5-cgi | grep Version Version : 5.2.0-8+etch13 php5 --version -bash: php5: command not found php-cgi --version PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

    Read the article

  • Can't seem to install imagick

    - by PolishHurricane
    I'm trying to install the PHP PEAR PECL extension "imagick" (image magick), but failing horribly. It seems that I keed installing packages to progress, but this one has me stumped. It seems to fail all the way at the bottom. Please Note: I'm using ArchLinux, apt-get doesn't save me. [root@Crux tmp]# pecl install imagick downloading imagick-3.0.1.tgz ... Starting to download imagick-3.0.1.tgz (93,920 bytes) .....................done: 93,920 bytes 13 source files, building running: phpize Configuring for: PHP Api Version: 20100412 Zend Module Api No: 20100525 Zend Extension Api No: 220100525 Please provide the prefix of Imagemagick installation [autodetect] : building in /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1 running: /tmp/pear/temp/imagick/configure --with-imagick checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/modules checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... 0.13.5 (ok) checking for gawk... gawk checking whether to enable the imagick extension... yes, shared checking whether to enable the imagick GraphicsMagick backend... no checking ImageMagick MagickWand API configuration program... found in /usr/bin/MagickWand-config checking if ImageMagick version is at least 6.2.4... found version 6.7.8 Q16 checking for MagickWand.h header file... found in /usr/include/ImageMagick/wand/MagickWand.h checking PHP version is at least 5.1.3... yes. found 5.4.6 checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 
1572864 checking command to parse /usr/bin/nm -B output from cc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC checking if cc PIC flag -fPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking whether the cc linker (/usr/bin/ld) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/libtool --mode=compile cc -I. -I/tmp/pear/temp/imagick -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/include -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/main -I/tmp/pear/temp/imagick -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -I/usr/include/ImageMagick -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/imagick/imagick_class.c -o imagick_class.lo mkdir .libs cc -I. -I/tmp/pear/temp/imagick -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/include -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/main -I/tmp/pear/temp/imagick -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -I/usr/include/ImageMagick -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/imagick/imagick_class.c -fPIC -DPIC -o .libs/imagick_class.o /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagematteâ: /tmp/pear/temp/imagick/imagick_class.c:268:2: warning: âMagickGetImageMatteâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:82) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getsizeoffsetâ: /tmp/pear/temp/imagick/imagick_class.c:406:2: warning: passing argument 2 of âMagickGetSizeOffsetâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:87:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_paintfloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:1016:3: warning: âMagickPaintFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:99) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c:1019:3: warning: âMagickPaintFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:99) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagepropertiesâ: /tmp/pear/temp/imagick/imagick_class.c:1083:2: warning: passing argument 3 of âMagickGetImagePropertiesâ from incompatible pointer type [enabled by default] In file included from 
/usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:35:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimageprofilesâ: /tmp/pear/temp/imagick/imagick_class.c:1131:2: warning: passing argument 3 of âMagickGetImageProfilesâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:33:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_recolorimageâ: /tmp/pear/temp/imagick/imagick_class.c:1402:2: warning: âMagickRecolorImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:109) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_setfontâ: /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âstruct _php_core_globalsâ has no member named âsafe_modeâ /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âCHECKUID_CHECK_FILE_AND_DIRâ undeclared (first use in this function) /tmp/pear/temp/imagick/imagick_class.c:1442:3: note: each undeclared identifier is reported only once for each function it appears in /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âCHECKUID_NO_ERRORSâ undeclared (first use in this function) /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_queryformatsâ: /tmp/pear/temp/imagick/imagick_class.c:2580:2: warning: passing argument 2 of âMagickQueryFormatsâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:41:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_queryfontsâ: /tmp/pear/temp/imagick/imagick_class.c:2607:2: warning: passing argument 2 of âMagickQueryFontsâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:40:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_colorfloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:3396:2: warning: âMagickColorFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:75) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_mapimageâ: /tmp/pear/temp/imagick/imagick_class.c:3730:2: warning: âMagickMapImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:86) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_mattefloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:3763:2: warning: âMagickMatteFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:88) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_medianfilterimageâ: 
/tmp/pear/temp/imagick/imagick_class.c:3790:2: warning: 'MagickMedianFilterImage' is deprecated (declared at /usr/include/ImageMagick/wand/magick-image.h:217) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_paintopaqueimage':
/tmp/pear/temp/imagick/imagick_class.c:3853:2: warning: 'MagickPaintOpaqueImageChannel' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:104) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_painttransparentimage':
/tmp/pear/temp/imagick/imagick_class.c:3916:2: warning: 'MagickPaintTransparentImage' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:107) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_reducenoiseimage':
/tmp/pear/temp/imagick/imagick_class.c:4059:2: warning: 'MagickReduceNoiseImage' is deprecated (declared at /usr/include/ImageMagick/wand/magick-image.h:266) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimageattribute':
/tmp/pear/temp/imagick/imagick_class.c:5068:2: warning: 'MagickGetImageAttribute' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:59) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimagechannelextrema':
/tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: 'MagickGetImageChannelExtrema' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:78) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: passing argument 3 of 'MagickGetImageChannelExtrema' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/deprecate.h:78:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: passing argument 4 of 'MagickGetImageChannelExtrema' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/deprecate.h:78:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimageextrema':
/tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: 'MagickGetImageExtrema' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:80) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: passing argument 2 of 'MagickGetImageExtrema' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/deprecate.h:80:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: passing argument 3 of 'MagickGetImageExtrema' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/deprecate.h:80:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimagehistogram':
/tmp/pear/temp/imagick/imagick_class.c:5629:2: warning: passing argument 2 of 'MagickGetImageHistogram' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-image.h:415:5: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimagepage':
/tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 2 of 'MagickGetImagePage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 3 of 'MagickGetImagePage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 4 of 'MagickGetImagePage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected 'ssize_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 5 of 'MagickGetImagePage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected 'ssize_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimageindex':
/tmp/pear/temp/imagick/imagick_class.c:6344:2: warning: 'MagickGetImageIndex' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:65) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_setimageindex':
/tmp/pear/temp/imagick/imagick_class.c:6369:2: warning: 'MagickSetImageIndex' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:113) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getimagesize':
/tmp/pear/temp/imagick/imagick_class.c:6447:2: warning: 'MagickGetImageSize' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:140) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_setimageattribute':
/tmp/pear/temp/imagick/imagick_class.c:6796:2: warning: 'MagickSetImageAttribute' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:111) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_flattenimages':
/tmp/pear/temp/imagick/imagick_class.c:7043:2: warning: 'MagickFlattenImages' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:132) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_averageimages':
/tmp/pear/temp/imagick/imagick_class.c:8079:2: warning: 'MagickAverageImages' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:131) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_mosaicimages':
/tmp/pear/temp/imagick/imagick_class.c:8516:2: warning: 'MagickMosaicImages' is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:135) [-Wdeprecated-declarations]
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getpage':
/tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 2 of 'MagickGetPage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 3 of 'MagickGetPage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 4 of 'MagickGetPage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected 'ssize_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 5 of 'MagickGetPage' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected 'ssize_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getquantumdepth':
/tmp/pear/temp/imagick/imagick_class.c:9154:2: warning: passing argument 1 of 'MagickGetQuantumDepth' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:52:4: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getquantumrange':
/tmp/pear/temp/imagick/imagick_class.c:9176:2: warning: passing argument 1 of 'MagickGetQuantumRange' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:53:4: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getsamplingfactors':
/tmp/pear/temp/imagick/imagick_class.c:9247:2: warning: passing argument 2 of 'MagickGetSamplingFactors' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:59:4: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getsize':
/tmp/pear/temp/imagick/imagick_class.c:9273:2: warning: passing argument 2 of 'MagickGetSize' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:86:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c:9273:2: warning: passing argument 3 of 'MagickGetSize' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:86:3: note: expected 'size_t *' but argument is of type 'long unsigned int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_getversion':
/tmp/pear/temp/imagick/imagick_class.c:9299:2: warning: passing argument 1 of 'MagickGetVersion' from incompatible pointer type [enabled by default]
In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21:
/usr/include/ImageMagick/wand/magick-property.h:55:4: note: expected 'size_t *' but argument is of type 'long int *'
/tmp/pear/temp/imagick/imagick_class.c: In function 'zim_imagick_setimageprogressmonitor':
/tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'struct _php_core_globals' has no member named 'safe_mode'
/tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'CHECKUID_CHECK_FILE_AND_DIR' undeclared (first use in this function)
/tmp/pear/temp/imagick/imagick_class.c:9534:2: error: 'CHECKUID_NO_ERRORS' undeclared (first use in this function)
make: *** [imagick_class.lo] Error 1
ERROR: `make' failed
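
    The deprecation warnings are harmless; the build actually dies on the safe_mode and CHECKUID_* references, which suggests a pre-3.1 imagick release being compiled against PHP 5.4 or later, where safe_mode was removed. A minimal sketch of one possible workaround, assuming a Debian/Ubuntu-style system and that a newer imagick branch is published on PECL (package names, versions and paths are illustrative, not taken from the log above):

    # Ensure the MagickWand development headers are installed (Debian/Ubuntu package name).
    sudo apt-get install libmagickwand-dev

    # Ask PECL for the beta (3.1.x) branch instead of the stable release;
    # check pecl.php.net/package/imagick for what is actually current.
    sudo pecl install imagick-beta

    # Enable the extension; adjust the conf.d path for your PHP packaging.
    echo "extension=imagick.so" | sudo tee /etc/php5/conf.d/imagick.ini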


  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB homes on Debian (5.0.3) for our Mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers because of a whole bunch of other issues that arise when going that way.

    Specifically, when mounting an SMB share with unix extensions switched on and the remote server bound to AD, the Finder cannot save files on the share: it touches the file and then bombs out with a -36 I/O error. Folder creation is fine, copying files in the Terminal behaves fine, and the problem seems to be limited to the Finder.

    The issue arises (I think) because the remote UID/GID is passed across when unix extensions are used. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups, whilst we're using a rid map on the server. Consequently there is a mismatch in ownership, which the Finder chooses to honour. OS X appears to handle this by using the remote UID and GID at the file-permission level (see below) and then setting an OS X ACL granting the local UID/GID the appropriate permissions on the file. I think the Finder touches the file (which the kernel allows because of the ACL), then checks the filesystem perms and drops out with the I/O error.

    On a client:

    fc-003353-d:homes2 root# ls -led test/
    drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/
    0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit
    1: group:ARTS\domain users allow
    2: group:everyone allow
    3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
    4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
    5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit

    We've tried the following without any luck:
    - Setting the Linux-side file owner to match the OS X GID/UID
    - Adding ACLs on the Linux filesystem which grant the OS X GID/UID perms
    - Disabling extended attributes
    - Setting streams=no in /etc/nsmb.conf on the client

    We're currently running a workaround, which is to just turn off unix extensions (sketched below); this forces the Macs to mount the share as the local user with u=rwx perms. That works for most things but is causing a few apps that expect certain perms to break in subtle ways. Worst case we'll continue running this way, but we would like to have the unix extensions on. Regards.

    Relevant SMB config below:

    [global]
    workgroup = ARTS
    realm = *snip*
    security = ADS
    password server = *snip*
    unix extensions = yes
    panic action = /usr/share/panic-action %d
    idmap backend = rid:ARTS=100000-10000000
    idmap uid = 100000-10000000
    idmap gid = 100000-10000000
    winbind enum users = Yes
    winbind enum groups = Yes
    veto files = /lost+found/aquota.*/
    hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/
    ea support = yes
    store dos attributes = yes
    map system = no
    map archive = no
    map readonly = no
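
    For reference, the workaround described above amounts to a single change in the [global] section of the server's smb.conf; a minimal sketch (the rest of the posted configuration is unchanged):

    [global]
        ...
        # Workaround from the post above: stop offering CIFS UNIX extensions so the
        # Macs mount the share as the local user (u=rwx) instead of honouring the
        # mismatched remote UID/GID. Revisit once the idmap mismatch is resolved.
        unix extensions = no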

