Search Results

Search found 19136 results on 766 pages for 'library understanding'.

Page 516/766

  • Configuring memcached for a particular scenario

    - by pradeepchhetri
    I have a web application which queries an opentsdb server (backed by an HBase cluster) for the datapoints of different metrics, and plots those metrics using the Dygraphs JavaScript graphing library. Since fetching a single metric's datapoints for the past day from opentsdb already takes nearly 2 seconds, my application, which plots nearly 25 metrics, is becoming very slow. To reduce this latency, I am thinking of using the PHP 5 memcached module to cache all the queries. But I have a few questions about memcached:

    1. Is there any way to configure memcached to keep updating its cache in the background, by running some command-line queries at a particular interval?
    2. Is there any way to configure memcached to always answer a query from its cache instead of refreshing it first? My application only plots datapoints for the past day, and missing a few datapoints is not critical.
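
    A note and a sketch (my addition, not from the original question): memcached is a passive store and will not refresh itself, so the usual shape is a cron job or background worker that re-queries opentsdb on a schedule and overwrites the cached values, while the page always reads from the cache. In this stack that worker would be a PHP CLI script run from cron; the sketch below is in C# purely to show the shape, and FetchFromOpenTsdb and the metric names are hypothetical placeholders.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class MetricCacheWarmer
        {
            // Cache the web tier reads from; it is never queried on the request path.
            static readonly ConcurrentDictionary<string, string> Cache =
                new ConcurrentDictionary<string, string>();

            static void Main()
            {
                string[] metrics = { "cpu.load", "mem.used" };   // hypothetical metric names

                // Background refresher: re-query the slow backend every 5 minutes.
                var refresher = new Timer(_ =>
                {
                    foreach (var metric in metrics)
                        Cache[metric] = FetchFromOpenTsdb(metric);   // overwrite; readers never wait
                }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));

                // Request path: always serve whatever is cached, even if slightly stale.
                Cache.TryGetValue("cpu.load", out var datapoints);
                Console.WriteLine(datapoints ?? "not warmed yet");

                GC.KeepAlive(refresher);
            }

            // Placeholder for the real (slow) opentsdb HTTP query.
            static string FetchFromOpenTsdb(string metric) =>
                "[datapoints for " + metric + "]";
        }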

    Read the article

  • Choosing a JavaScript Asynch-Loader

    - by Prisoner ZERO
    I’ve been looking at various asynchronous resource loaders and I’m not sure which one to use yet. Where I work we have disparate group efforts whose class modules may use different versions of jQuery (etc.). As such, nested dependencies may differ as well. I have no control over this, so I need to dynamically load resources which may use alternate versions of the same library. My requirements are:

    - Load JavaScript and CSS resource files asynchronously.
    - Manage dependency order and nested dependencies across versions.
    - Detect if a resource is already loaded.
    - Must allow for cross-domain loading (CDNs).
    - (optional) Allow us to unload a resource.

    I’ve been looking at Curl, RequireJS, JavaScriptMVC and LABjs. I might be able to fake these requirements myself by loading versions into properly namespaced variables and using an array to track what is already loaded... but (hopefully) someone has already invented this. So my questions are: Which ones do you use, and why? Are there others that may satisfy my requirements fully? Which do you find most eloquent and easiest to work with, and why?

    Read the article

  • Is the addition of a duration to a date-time defined in ISO 8601?

    - by Benjamin
    I'm writing a date-time library and need to implement the addition of a duration to a date-time. If I add a one-month duration (P1M) to 31st March 2012 (2012-03-31), does the standard define what the result is? Because the resulting date (31st April) does not exist, there are at least two options:

    1. Fall back to the last day of the resulting month. This is the approach currently taken by the ThreeTen API, the (alpha) reference implementation of JSR-310:

        ZonedDateTime date = ZonedDateTime.parse("2012-03-31T00:00:00Z");
        Period duration = Period.parse("P1M");
        System.out.println(date.plus(duration).toString()); // 2012-04-30T00:00Z

    2. Carry the extra day to the next month. This is the approach taken by the DateTime class in PHP:

        $date = new DateTime('2012-03-31T00:00:00Z');
        $duration = new DateInterval('P1M');
        echo $date->add($duration)->format('c'); // 2012-05-01T00:00:00+00:00

    I'm surprised that two date-time libraries contradict each other on this point, so I'm wondering whether the standard defines the result of this operation.
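
    As an aside (not part of the original question), .NET's DateTime.AddMonths is documented to clamp to the last valid day of the resulting month, i.e. the same choice as ThreeTen; a quick check:

        using System;

        class MonthAdditionCheck
        {
            static void Main()
            {
                // DateTime.AddMonths clamps to the last valid day of the target month,
                // so 2012-03-31 plus one month yields 2012-04-30 (the ThreeTen behaviour).
                var date = new DateTime(2012, 3, 31, 0, 0, 0, DateTimeKind.Utc);
                Console.WriteLine(date.AddMonths(1).ToString("yyyy-MM-dd'T'HH:mm:ssK"));
                // prints 2012-04-30T00:00:00Z
            }
        }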

    Read the article

  • How often do you review fundamentals?

    - by mlnyc
    So I've been out of school for a year and a half now. In school, of course, we covered all the fundamentals: OS, databases, programming languages (i.e. syntax, binding rules, exception handling, recursion, etc.), and fundamental algorithms. The rest were more in-depth topics on things like NLP, data mining, etc.

    Now, a year ago if you had told me to write a quicksort, or reverse a singly-linked list, or analyze the time complexity of a 'naive' algorithm versus its dynamic-programming counterpart, I would have been able to give you a decent and hopefully satisfying answer. But if you had asked me more real-world questions I might have been stumped (things like how you would handle logging for an application, the security differences between GET and POST, the differences between SQL Server and Oracle SQL, or anything I list on my resume as currently working with [jQuery questions, ColdFusion questions, ...]).

    Now I feel things are the opposite. I haven't written my own sort since graduating, and I don't really have to worry much about theoretical things that do not naturally fall out of the problems I am trying to solve. For example, I might give you a great SQL solution using an analytic function that would previously have stumped me, or write a cool web application using Angular or something, but ask me to write an algorithm for insertAfter(Element* elem) and I might not be able to do it in a reasonable time frame.

    I guess my question to the experienced programmers is: how do you balance learning and experimenting with new technologies (fun!), working on personal projects (also fun!), solving real-world problems in a timeboxed environment, where I might reach for a library that does what I want rather than reinvent the wheel so that I can focus on the problem I am trying to solve (work, basically), and refreshing older theoretical material which is still valid for interviews and such (which can be a drag)? Do you review older material (such as famous algorithms, dynamic programming, Big-O analysis, locking implementations) regularly, or just when you need it? How much time do you dedicate to each in your 'deliberate practice', and do you have a to-do list of topics that you want to work on?

    Read the article

  • Open Directory authenticated bind succeeds, but creates incomplete record

    - by Jay Thompson
    I have about a dozen Macs running 10.6.7 or 10.6.8, which are all failing to bind properly to my new 10.7.4 Server OD. I can bind them just fine via Directory Utility or dsconfigldap, and it reports success. However, when I look at the record, it is failing to write the MAC address. Even if I manually update the record with the MAC address, MCX doesn't do anything and clients can't log in to OD accounts. All of the affected clients have hundreds of lines in the /Library/Logs/DirectoryService.error.log like so: 2012-09-15 22:23:18 EDT - T[0x00007FFF70292CC0] - GetMACAddress returned 0x *** bad control string *** 8x I do know that all of these clients were previously managed with the Guest computer account, and I also know that they were all imaged with a DeployStudio image when they were purchased. I've tried dscacheutil -flushcache, but after that I'm drawing a blank. Google has a few hits, but nothing very helpful. Re-imaging would be ideal but probably isn't going to happen. Anyone come across this before?

    Read the article

  • Thunderbird 3.0 refuses to start on Mac OS X?

    - by jtimberman
    I just downloaded Thunderbird 3.0 on my Macbook Pro running Leopard, and installed it in /Applications. When I attempt to start it, the icon opens on the Dock as normal, but I get the following dialog. I don't have a ~/Library/Application Support/Thunderbird directory at all, let alone a .parentlock file. While I didn't expect it to help, I did reboot my system. And sign out and back in. And close all programs besides Thunderbird. Earlier versions of Thunderbird have worked just fine.

    Read the article

  • Searching Netapp Network Share in Windows 7

    - by user121270
    Windows 7 famously does not do something its predecessor, Windows XP, did very well: index and search network drives. Sometimes the logic of Microsoft is absolutely baffling. That said, I am trying to find some solution to the issue, which is made more complicated by the fact that we are using a NetApp FAS2020 as a CIFS file server. I know some of the solutions to the Windows 7 search-index issue revolve around having the Search Service installed on a Windows 2008 server and then adding that server share to a library on the Windows 7 workstation. Is it possible to accomplish this in any way with a CIFS share on a NetApp filer?

    Read the article

  • Time Synch Architecture in Windows Domain Environment

    - by Param
    I just read the following article: "In a domain, time synchronization takes place when Windows Time Service turns on during system startup and periodically while the system is running." ( http://technet.microsoft.com/en-us/library/cc779145%28v=ws.10%29.aspx ) From that article I understand that the first sync happens as soon as the system starts, but after that, at what periodic interval does my Windows client (a Windows XP, Windows 7 or Windows Server 2008 domain member) sync with my domain controller (the PDC emulator)? And how should I verify the sync interval? My domain controller is Windows Server 2008 R2 Standard.
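
    A hedged C# sketch (my addition): the W32Time service keeps its polling configuration in the registry, so one way to inspect the interval is to read those values. The key and value names below are the commonly documented ones; treat them as assumptions and cross-check with "w32tm /query /configuration" on the machine itself.

        using System;
        using Microsoft.Win32;

        class W32TimeIntervals
        {
            static void Main()
            {
                // Domain members poll between MinPollInterval and MaxPollInterval
                // (stored as log2 seconds); manually configured NTP peers use
                // SpecialPollInterval (plain seconds). Value names are assumptions;
                // verify against "w32tm /query /configuration" locally.
                const string config =
                    @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config";
                const string ntpClient =
                    @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient";

                Console.WriteLine("MinPollInterval (log2 s): " + Registry.GetValue(config, "MinPollInterval", "n/a"));
                Console.WriteLine("MaxPollInterval (log2 s): " + Registry.GetValue(config, "MaxPollInterval", "n/a"));
                Console.WriteLine("SpecialPollInterval (s):  " + Registry.GetValue(ntpClient, "SpecialPollInterval", "n/a"));
            }
        }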

    Read the article

  • Capture the active window to jpg using C# ?

    - by user29004
    Hello, I am creating a Windows application that captures session details for all users. I want to track how many users are connected to my machine over RDP sessions, and log all their activity with screenshots. So I would like to know how I can capture the active screen. I tried RUNAS, but it only shows me a screenshot of the session I am running in. I am able to access the user's application data, but not the actual screen. I also tried LogonUser and CreateProcessAsUser, but I am not getting anywhere. So please let me know how I can do this. I used the 'cassia' library (from Google Code) to list all the currently logged-on users. Thanks, Laxmilal
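
    A hedged C# sketch of the basic capture step for the session you are running in, using System.Drawing and System.Windows.Forms. It does not solve the harder part of grabbing other users' RDP sessions, which generally needs an agent process running inside each session; capturing only the foreground window would additionally need GetForegroundWindow/GetWindowRect via P/Invoke.

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Windows.Forms;

        static class ScreenShot
        {
            // Captures the primary screen of the current session and saves it as a JPEG.
            public static void CaptureToJpeg(string path)
            {
                Rectangle bounds = Screen.PrimaryScreen.Bounds;
                using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
                using (var graphics = Graphics.FromImage(bitmap))
                {
                    graphics.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
                    bitmap.Save(path, ImageFormat.Jpeg);
                }
            }
        }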

    Read the article

  • Should static parameters in an API be part of each method?

    - by jschoen
    I am currently creating a library that is a wrapper for an online API. The obvious end goal is to make it as easy for others to use as possible. As such, I am trying to determine the best approach for the common parameters of the API. In my current situation there are three (consumer key, consumer secret, and an authorization token), and they are needed in essentially every API call. My question is: should I make these three parameters required for each method, or is there a better way? I see my current options as being:

    1. Place the parameters in each method call:
       public ApiObject callMethod(String consumerKey, String consumerSecret, String token, ...)
       This one seems reasonable, but awfully repetitive to me.
    2. Create a singleton class that the user must initialize before calling any API methods. This seems wrong, and would essentially limit them to accessing one account at a time via the API (which may be reasonable, I don't know).
    3. Have them place the values in a properties file in their project, so that I can load and store the properties that way. This seems similar to the singleton to me, but they would not have to explicitly call something to initialize these values.

    Is there another option I am not seeing, or a more common practice in this situation that I should be following?
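
    One more option worth naming (my addition, not from the original question) is the plain client-object pattern many API wrappers use: the caller constructs a client with the credentials once and every method reuses them, which avoids the repetition of option 1 without the single-account limit of the singleton. A minimal sketch, in C# for brevity (the same shape works in Java); ApiClient, CallMethod and ApiObject are illustrative names only. Callers who need two accounts simply create two client instances.

        // Hypothetical names for illustration only.
        public sealed class ApiObject { }

        public sealed class ApiClient
        {
            private readonly string consumerKey;
            private readonly string consumerSecret;
            private readonly string token;

            // Credentials are supplied once, when the client is constructed.
            public ApiClient(string consumerKey, string consumerSecret, string token)
            {
                this.consumerKey = consumerKey;
                this.consumerSecret = consumerSecret;
                this.token = token;
            }

            // Every call reuses the stored credentials instead of taking them as parameters.
            public ApiObject CallMethod(string argument)
            {
                string signature = consumerKey + ":" + consumerSecret + ":" + token;
                // ...sign and send the real request here using 'signature' and 'argument'...
                return new ApiObject();
            }
        }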

    Read the article

  • Stylecop 4.7.38.0 has been released

    - by TATWORTH
    StyleCop 4.7.38.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:

    - Move Registry functions into common Utils class. Styling fixes.
    - Dictionary updates. Styling fixes.
    - Update styling. Styling fixes. Update docs.
    - Spelling fixes in our own source.
    - Add solution-specific spellings to our own Settings.StyleCop.
    - Deploy more up-to-date spelling checkers and dictionaries.
    - Update our own StyleCop and dictionaries for analyzing our own build.
    - Update the custom dictionaries.
    - Update the spellchecker to work for 32- or 64-bit processes.
    - Update the LaTeX parser; update it for $$...$$; fix it to allow any character between $ and $.
    - Add a new tab to the settings editor to add/remove spelling words. Ignore words starting and ending with a '$'. Add support for our own recognized words in the settings file.
    - If the spelling library can't load then don't analyse the spellings, and fail gracefully.
    - Fix for 7398: insert the correct type-name in the example for summary.
    - Fix for 7396: added new tests. All doc elements to end with <c> elements and not be reported for lack of white-space or for being too short.

    Read the article

  • MacBook repeatedly disconnects from WiFi

    - by redwall_hp
    I have an early 2008-model MacBook (2.4GHz). The WiFi router I have at home is a Linksys WRT54GX2 that I have had for a few years. My MacBook has recently started disconnecting from the router every few minutes, which is rather annoying. I can reconnect again without having to restart the router or anything, as it seems that the MacBook is just dropping the connection. I have tried changing the channel on the router, and upgrading the laptop from Leopard to Snow Leopard made no difference either. I'm only about six feet from the linksys device, so distance isn't an issue. This only happens with the Linksys router, while I can use the local library's open network without issue. The problem also seemingly becomes more pronounced after midnight. Any ideas as to what the problem could be?

    Read the article

  • send mail via smtp on debian 6?

    - by acidzombie24
    I followed this tutorial: http://library.linode.com/email/exim/send-only-mta-debian-6-squeeze?format=print. When I got to the end, I sent the email to [email protected]. I didn't receive it. I didn't get spam either; I got nothing. I also didn't get an error message on the console. How do I properly set up an SMTP server and send an email from it? I'll note I am testing this on a VM on my local computer. My ISP doesn't block any traffic whatsoever (which is one reason why I use them), so... what can I do? I also tried this tutorial.

    Read the article

  • How can I get GrowlTunes to Launch whenever iTunes Launches?

    - by Orion751
    I think I would be able to do this by modifying iTunes' launch services. Any idea how to go about that? Would editing its info.plist file in a manner similar to the below do what I'm looking for?

        <key>LSOpenApplication</key>
        <string>?</string>

    EDIT: Would http://developer.apple.com/library/mac/#documentation/Carbon/Reference/LaunchServicesReference/Reference/reference.html%23//apple_ref/c/func/LSOpenApplication provide any hints?
    EDIT2: Last.fm's official Mac Scrobbler (http://www.last.fm/download) is a perfect example of the functionality that I'm looking for.

    Read the article

  • I still can't figure out how to program!

    - by Mark K.
    Please help! I've read lots of programming books for various languages: Java, Python, C, etc. I understand and know all of the basics of the languages, and I understand algorithms and data structures (the equivalent of, say, two years of CompSci classes). BUT, I still can't figure out how to write a program that does anything useful. All of the programming books show you how to write the language, but NOT how to use it! The programming examples are all very basic, like building a card catalog for a library, or a simple game, or using algorithms, etc. They don't show you how to develop complex programs that actually do anything useful! I've looked at open-source programs on SourceForge, but they don't make much sense to me. There are hundreds of files in each program and thousands of lines of code. But how do I learn how to do this? There's nothing in any book I can buy on Amazon that will give me the tools to write any of these programs. How do you go from reading Intro to Java, or Programming Python, or C Programming Language, etc., to actually being able to say: I have an idea for X program, and this is how I go about developing it? It seems like there is so much more involved in writing a program than you can learn in a book or from a class. I feel like there is something I'm missing. Can anyone put me on the right track?

    Read the article

  • Change MacOS X guest screen resolution for VirtualBox

    - by Pymoo
    I have tried all the alternatives and resources that I found on the internet to change the screen resolution in my Mac OS X guest. I have the latest VirtualBox version (4.1.22) and Mac OS X 10.6.3 Snow Leopard running in a guest VM. Some solutions that don't work for me are:

    Tuning the virtual machine settings: adding the corresponding ExtraDataItem entries in the .vbox file, or running these two commands:

        vboxmanage setextradata "MAC OS X" "CustomVideoMode1" "1360x768x32"
        vboxmanage setextradata "MAC OS X" "GUI/CustomVideoMode1" "1360x768x32"

    Editing the guest OS boot configuration: modifying /Library/Preferences/SystemConfiguration/com.apple.boot.plist with these lines:

        <key>Kernel Flags</key>
        <string>"Graphics Mode"="1360x768x32"</string>
        <key>Graphics Mode</key>
        <string>1360x768x32</string>

    Any other suggestion, or something I may have missed, is welcome. Thanks in advance,

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet. People are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases with Oracle SRM. Easy Publishing - Companies now have an average of 178 social media accounts - with every product or geography or employee group creating their own social media channel. For example, if you work at an international hotel chain with every single hotel creating their own Facebook page for their location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess - with logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows and have each account generate their own content but that content must be approved before publishing. The benefits of this are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts. Monitoring and Listening - People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about them on social channels. They can drill down into individual posts or get a high level view of trends and mentions. The benefits of this are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis. Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social enable your enterprise. With Oracle SRM, social media is not just a marketing channel. Social media is also mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders etc., but what I am more interested in is encapsulation with regards to generating useful F# code libraries in .NET, which is done using namespaces and modules. Let's have a look at some C# code first…

        using System;

        namespace EncapsulationNS
        {
            public class EncapsulationCLS
            {
                public static void TestMethod()
                {
                    Console.WriteLine("Hello");
                }
            }
        }

    Pretty simple stuff… now the F# equivalent…

        namespace EncapsulationNS

        module EncapsulationMDL =
            let TestFunction =
                System.Console.WriteLine("Hello")
                ()

    Even easier… let's look at some specifics about F# namespaces. Namespaces are open, meaning multiple source files and assemblies can contribute to the same namespace. So namespaces are a great way to group modules together, which raises the question: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files that simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That being said, one is not restricted to one module per file – there is flexibility to have multiple modules in one code file, however with my limited F# experience I would still recommend using the file as the standard level of separating modules, as it is then very easy to find your way around a solution.

    An important note about interop between F# and other .NET languages: I wrote a blog post a while back about a very basic F# to C# interop. If I were to reference an F# library in a C# project (for instance 'TestFunction'), C# would show this method as a static method call, meaning I would not have to instantiate an instance of the module.

    Read the article

  • Draw "vision cone" / targeting element onto game world

    - by gkimsey
    I'm wanting to indicate various things using a "pie slice" sort of shape, similar to vision cones in stealth-game minimaps, or targeting indicators in RTS-type games for frontal-area attacks. Something generic enough to be used for both would be ideal. I need to be able to procedurally (and efficiently) change things like the slice width and length, color, transparency, position in the world, etc. For my particular situation, there's no concern with elevation, funky terrain, or really any third axis at all as far as this element is concerned. I have two first inclinations on how to accomplish this:

    1. Manually generate the vertices for a main triangle (possibly two, superimposed to get the border effect), plus a handful more to approximate the arc at the end, and roll it into a mesh.
    2. Use some sort of 2D drawing library to create a circle and mask it off at the right angles, render to texture, and use that.

    For reference, I have some experience with Ogre3D, but I'm not attached to it, as this is a mostly academic pursuit at the moment. Other technologies that might be better at accomplishing this are more than welcome. Finally, I'm kind of curious about how to do a "flashlight" or similar 3D effect that could produce the same result, but on all surfaces in the lit area.
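
    A hedged sketch (mine, not from the question) of the first inclination: procedurally building the triangle-fan vertices for the slice. It is engine-agnostic, with a tiny Vertex struct standing in for whatever vector type Ogre3D or another engine provides.

        using System;
        using System.Collections.Generic;

        struct Vertex
        {
            public float X, Y;
            public Vertex(float x, float y) { X = x; Y = y; }
        }

        static class VisionCone
        {
            // Builds a triangle-fan "pie slice": the apex plus points along the arc.
            // facingRadians is the centre direction, widthRadians the total cone angle.
            public static List<Vertex> BuildSlice(float radius, float facingRadians,
                                                  float widthRadians, int arcSegments)
            {
                var vertices = new List<Vertex> { new Vertex(0f, 0f) };   // apex at the origin
                float start = facingRadians - widthRadians / 2f;
                for (int i = 0; i <= arcSegments; i++)
                {
                    float angle = start + widthRadians * i / arcSegments;
                    vertices.Add(new Vertex(radius * (float)Math.Cos(angle),
                                            radius * (float)Math.Sin(angle)));
                }
                // Render as a triangle fan (apex, arc[i], arc[i+1]); translate/rotate the
                // whole thing into world space and recolour per frame to vary position,
                // width, length, colour or transparency.
                return vertices;
            }
        }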

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
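
    A tiny C# illustration (added here, not part of the original editorial) of the array-bounds point above: the type system accepts the program because the array's length is not part of its type, so safety is only restored at run time via exception handling.

        using System;

        class BoundsExample
        {
            static void Main()
            {
                int[] values = { 1, 2 };              // length 2, but the type is just int[]
                try
                {
                    Console.WriteLine(values[4]);     // compiles fine; fails only at run time
                }
                catch (IndexOutOfRangeException)
                {
                    Console.WriteLine("Caught at run time, not prevented at compile time.");
                }
            }
        }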
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Does the .NET Framework need to be reoptimized after upgrading to a new CPU microarchitecture?

    - by Louis
    I believe that the .NET Framework will optimize certain binaries targeting features specific to the machine it's installed on. After changing the CPU from an Intel Nehalem to a Haswell chip, should the optimization be run again manually? If so, what is the process for that? Between generations, here are some notable additions:

    - Westmere: AES instruction set
    - Sandy Bridge: Advanced Vector Extensions
    - Ivy Bridge: RdRand (hardware random number generator), F16C (16-bit floating-point conversion instructions)
    - Haswell: Haswell New Instructions (includes Advanced Vector Extensions 2 (AVX2), gather, BMI1, BMI2, ABM and FMA3 support)

    So my, albeit naive, thought process was that the optimizations could take advantage of these in general cases. For example, perhaps calls to the Random library could utilize the hardware RNG on Ivy Bridge and later models.

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns are returned with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear:

    - .../car/1234.json returns the full Car object for 1234, all its properties like colour, make, model, year, engine_size, etc. Let's call this full.
    - .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite.
    - ...search.json returns, among other things, a list of car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite.

    I want to know what the pros and cons of each of the following possible designs are, and whether there is a better design that I haven't covered:

    1. Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class.
    2. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses.
    3. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created).

    I expect among other things there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars, regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
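
    A hedged C# sketch of the third design (a single Car class whose optional properties are nullable, tagged with the response detail it came from); the property and enum names are illustrative, not taken from the API being wrapped. This shape lets cars from the search, list and single-car responses share one List<Car>, at the cost of consumers having to check for nulls, and a consumer can always call the single-car endpoint to upgrade a lite instance to full when the extra fields are needed.

        public enum ResponseDetail { LiteLite, Lite, Full }

        public class Car
        {
            // Present in every response.
            public string Id { get; set; }
            public string Make { get; set; }
            public string Model { get; set; }

            // Only populated by the lite and full responses, hence nullable/optional.
            public string Colour { get; set; }
            public int? Year { get; set; }
            public decimal? EngineSize { get; set; }

            // Records which endpoint this instance was hydrated from.
            public ResponseDetail Detail { get; set; }
        }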

    Read the article

  • CodePlex Daily Summary for Saturday, August 23, 2014

    CodePlex Daily Summary for Saturday, August 23, 2014

    Popular Releases
    - DIII Save Editor: ROS Alpha 1.2.14.100: Initial ROS alpha release; please report all bugs.
    - SEToolbox: SEToolbox 01.044.014 Release 2: Fixed ship name not saving. Fixed broken cubes view bug. Fixed cast VRage.MyFixedPoint error when opening games with meteors. Added checkbox when importing a 3D model to Export Ship, to fill it as solid.
    - CS-Script Source: Release v3.8.5: Fixed problem with the warnings getting hidden in case of successful compilation. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation.
    - Outlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for new version: added button in config window to reset the last backup time (this will trigger the backup after closing Outlook); minimum interval set to 0 (backup at each closing of Outlook); catch exception when data store entry is corrupt; added two parameters (prefix and suffix) to automatically rename the backup file; updated VSTO Runtime to 10.0.50325; upgraded project to Visual Studio 2013; added optional command to run after backup (e.g. pack backup files, ...); Add...
    - babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: add a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads Lua files.
    - Open NFe: RDI Open NFe 3.0 (alpha): Update for layout 3.10 of the NFe.
    - MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.
    - SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements. Bug fixes. Stores auth method and user name. Moved experimental settings to Advanced box.
    - CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha. Support info: http://ctrlaltstudio.com/viewer/support. Privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.
    - HDD Guardian: HDD Guardian 0.6.1: New: package now includes smartctl 6.3. Removed: standard notification e-mail; you now have to set your own mail server to send e-mail alerts. Bugfixes: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes: NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com' and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
    - Exchange Database Recovery With and Without Log Files is Possible: Exchange Recovery Application: This Exchange recovery software comes with a free trial edition which helps users inspect the working capability of the recovery process. Download the free demo version and repair inaccessible mailboxes from an EDB file without any obstructions.
    - Linq 4 Javascript: Version 2.4: Minor changes: added Count() and Count(with where clause); Distinct will now use a dictionary instead of a custom dictionary object; organized the unit tests (the variable names will actually make sense and won't be 2 letters); SelectMany will now use the queryable logic.
    - Office / SharePoint 2013 Continuous Integration with TFS 2012: 1.1.0.1: Fixed the following issues in TfsDropDrownloader: updated to make it work with VS 2013 (including VS 2013 updates) in addition to VS 2012; extended the timeout for downloading drops from 100 seconds to 1 hour; added more troubleshooting information in the output.
    - CRM Solution CommandLine Helper: CRM Solution Cmd Helper 1.0.0.4: Includes: bug fix for export argument validation: check directory path existence (thanks mszlapa).
    - Office To PDF: OfficeToPDF 1.4: Adds support for additional file types: mpp (requires MS Project >= 2010); vsdx, vsdm (requires MS Visio >= 2013); csv; odt, odc, odp; pot, potm, potx. Improves stability and clean removal of COM objects. Adds new flags: /verbose - be more verbose when running; /markup - allow document markup in the PDF when converting Word documents; /excel_max_rows - adds a maximum limit on the number of rows a worksheet can contain when converting Excel documents; /pdfa - crea...
    - MongoRepository: MongoRepository 1.6.6: Installing using NuGet (recommended): MongoRepository is now a NuGet package for your convenience; step-by-step instructions can be found in "Installing MongoRepository using NuGet". Installing using binaries: you can also choose to download the binaries instead of using NuGet. There are 2 downloads: mongorepository_full.x.x.x contains all binaries required (MongoRepository and the 10gen C# driver); mongorepository.x.x.x contains only the MongoRepository binary. Make sure you reference MongoReposit...
    - Cryptography Enumerations JavaScript Shell: Cryptography Enumerations JavaScript Shell 1.0.0: First release.
    - CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes over CMake Tools for Visual Studio 1.1: added support for CMake 3.0; added support for word completion; added IntelliSense support for the CMAKE_HOST_SYSTEM_INFORMATION command; fixed syntax highlighting for tokens beginning with escape sequences; fixed an issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.
    - GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview: 1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: if you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml, GW2PAO.EventSettings.xml, GW2PAO.WvWSettings.xml, GW2PAO.ZoneCompletionSettings.xml. New features: Added new "No...

    New Projects
    - 3D Projectile: A 3D projectile program showing the motion of a ball.
    - ASP.NET Web Application Starter Kit: This project template is an ASP.NET solution skeleton for a typical web application or single-page application (SPA).
    - Behaving - Behaviour Tree for C#: Behaviour is a Behaviour Tree implementation in C#.
    - Kinect Stream Saver Application _SDK 2: This application is developed based on a sample called "ColorBasics-D2D C++" developed by Microsoft Corporation. (Compatible with SDK 2: K4W v2 Dev Preview.)
    - MVC Bootstrap Paginator: The MVC Bootstrap Paginator is lightweight and easy to use. It works out of the box and requires minimal configuration.
    - NuGet Reference Switcher: NuGet Reference Switcher is a Visual Studio extension which can be used to automatically switch NuGet DLL references to project references and vice versa.
    - QKit: A WP8.1 library that provides various controls and classes that will help developers quickly and easily augment their apps to behave more like native apps.
    - SharePoint 2013 Document Icon Linker: Links the document icon in library views to the document.
    - SharePoint Autocomplete People Search: SharePoint people search.
    - WADM: WADM

    Read the article

  • Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

    - by TheCodeJunkie
    I'm playing around a bit with image processing and decided to read up on how color quantization works, and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in the Leptonica library and came across something I thought was a bit odd. Now I want to stress that I am far from an expert in this area, nor am I a math-head, so I am predicting that this all comes down to me not understanding all of it, and not that the implementation of the algorithm is wrong at all.

    The algorithm states that the vbox should be split along the largest axis, and that the split should use the following logic: "The largest axis is divided by locating the bin with the median pixel (by population), selecting the longer side, and dividing in the center of that side. We could have simply put the bin with the median pixel in the shorter side, but in the early stages of subdivision, this tends to put low density clusters (that are not considered in the subdivision) in the same vbox as part of a high density cluster that will outvote it in median vbox color, even with future median-based subdivisions. The algorithm used here is particularly important in early subdivisions, and is useful for giving visible but low population color clusters their own vbox. This has little effect on the subdivision of high density clusters, which ultimately will have roughly equal population in their vboxes."

    For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following:

    - Iterate over all the possible green and blue variations of the red color.
    - For each iteration, add to the total number of pixels (population) found along the red axis.
    - For each red, sum up the population of the current red and the previous ones, thus storing an accumulated value for each red.

    Note: when I say 'red' I mean each point along the axis that is covered by the iteration; the actual color may not be red but contains a certain amount of red.

    So for the sake of illustration, assume we have 9 "bins" along the red axis and that they have the following populations:

        4 8 20 16 1 9 12 8 8

    After the iteration over all red bins, the partialsum array will contain the following counts for the bins mentioned above:

        4 12 32 48 49 58 70 78 86

    And total would have a value of 86.

    Once that's done, it's time to perform the actual median cut, and for the red axis this is performed on line 01346. It iterates over the bins and checks their accumulated sums. And here's the part that throws me off from the description of the algorithm: it looks for the first bin that has a value greater than total/2. Wouldn't total/2 mean that it is looking for a bin whose value is greater than the average value, and not the median? The median for the above bins would be 49. The use of 43 or 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was. Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but does not mention how to proceed if there is an even number of bins: the median would be the result of (a+b)/2, and it's not guaranteed that any of the bins contains that population count.

    So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if it got a bit long-winded, but I wanted to be as thorough as I could, because it's been driving me nuts for a couple of days now ;)
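
    A small, hedged sketch (not Leptonica's actual code) of the accumulation-and-split step described above, so the total/2 question is easy to experiment with: it builds the partial sums and reports both the first bin whose accumulated count exceeds total/2 and the bin containing the median pixel by population.

        using System;

        class MedianCutSplit
        {
            static void Main()
            {
                int[] population = { 4, 8, 20, 16, 1, 9, 12, 8, 8 };   // bins along the red axis

                // Accumulate the populations, as in the loop described above.
                int[] partialSum = new int[population.Length];
                int running = 0;
                for (int i = 0; i < population.Length; i++)
                {
                    running += population[i];
                    partialSum[i] = running;
                }
                int total = running;                                    // 86 for this data

                // Bin chosen by the "greater than total/2" test (total/2 == 43 here).
                int halfBin = Array.FindIndex(partialSum, s => s > total / 2);

                // Bin containing the median pixel by population (the 43rd of 86 pixels).
                int medianPixel = (total + 1) / 2;
                int medianBin = Array.FindIndex(partialSum, s => s >= medianPixel);

                Console.WriteLine("partial sums: " + string.Join(" ", partialSum));
                Console.WriteLine("total/2 rule picks bin " + halfBin +
                                  ", median pixel falls in bin " + medianBin);
            }
        }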

    Read the article

  • share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows them to perform some processing. For instance:

        std::vector<Point> checkIntersections(int process_mode = 0);

    This method tests whether some building outlines intersect and returns the intersection points. But if you pass a non-zero argument, the method will modify the outlines to remove the intersections. I think this is pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change it. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this:

        // a private worker
        std::vector<Point> workerIntersections(int process_mode = 0)
        {
            // it's the equivalent of the current checkIntersections; it may perform
            // a process depending on process_mode
        }

        // public interfaces for check and process
        std::vector<Point> checkIntersections() /* const */
        {
            workerIntersections(0);
        }

        std::vector<Point> processIntersections(int process_mode /* I have different process modes */)
        {
            workerIntersections(process_mode);
        }

    But that forces me to break const correctness, because workerIntersections is a non-const method. How can I separate check and process while avoiding code duplication and keeping const-correctness?

    Read the article
