Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • Bash script runs fine, but not in cron

    - by radiotech
    I have a script that's supposed to record a Shoutcast stream for an hour, convert it to MP3, and then save it. The script runs correctly when I run it from the terminal, but I can't seem to get it to run in cron (where it should run every hour, at the top of the hour). Here's the line in crontab:

        0 * * * * /medialib/tech/bin/recordstream 2>&1 >> /medialib/tech/cron.log

    and here's the script (the closing brace of the --sout string appears to have been lost in transcription and is restored here):

        #!/bin/bash
        name="$(date +%s)"
        mp3_name=$name.mp3
        wav_name=$name.wav
        timeout -sHUP 60m vlc -I dummy --sout "#transcode{channels=2}:std{access=file,mux=wav,dst=/medialib/stream_backup/wav/$wav_name}" /medialib/tech/lib/listen.m3u
        lame --mp3input /medialib/stream_backup/wav/$wav_name /medialib/stream_backup/$mp3_name
        rm /medialib/stream_backup/wav/$wav_name

    Thank you!

    EDIT: Contents of cron.log (this text has been in the log file since it was transferred from an old server where it was working):

        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
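    A detail worth checking in setups like this (not part of the original question): cron runs jobs with a minimal environment, so vlc, timeout, and lame may not be on cron's PATH even though they are on the interactive shell's. A minimal sketch of a crontab that makes the environment explicit; the PATH value is an assumption, adjust to your system:

        # crontab -e
        # Give cron an explicit PATH so vlc/timeout/lame resolve (value is an assumption)
        PATH=/usr/local/bin:/usr/bin:/bin
        # Send stdout AND stderr to the log; note the ">> file 2>&1" order.
        # The original "2>&1 >>" sends stderr to the old stdout, not to the file,
        # which is why the log may miss the actual error.
        0 * * * * /medialib/tech/bin/recordstream >> /medialib/tech/cron.log 2>&1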


  • What functionality of an iPad can I use with Ubuntu?

    - by Andrew Ferrier
    What functionality of an iPad can I use with Ubuntu? I'm thinking about buying an iPad, but I only have Ubuntu PCs these days (no Windows, no Mac), and I'm nervous that it may be too reliant on the existence of iTunes. I'm less concerned about getting media onto and off it (I have read that I can do this with libimobiledevice), but will I be able to activate it, given that activation normally involves iTunes? Does it need to be USB-synced on a regular basis, or can I do most anything I want through the cloud (i.e. over wifi)?

    Edit: In answer to jrgifford's question: in theory, I'd like to do everything I would want to do with it were I using Windows/Mac. But more specifically, I think I'd want to:
    - "Activate" it, if that's necessary
    - Copy music / video onto it
    - Install apps

    How much of this can I do without a Windows/Mac PC? Is it necessary to activate it? I don't think I'm that interested in backing it up (what would I back up?), but then again, maybe I'm confused as to how easy it is to get data on and off. Can I get a regular SFTP app for the iPad to copy data to my Ubuntu machine over wi-fi, for example?
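    For the media side the question mentions, pairing and mounting with the libimobiledevice toolchain generally looks like the sketch below. This is only an illustration of that toolchain, not a claim that activation works without iTunes; the mount point is a hypothetical path, and copying files at the filesystem level does not necessarily register media in the device's library:

        # Pair the iPad over USB (libimobiledevice tools)
        idevicepair pair
        # Mount its media filesystem via FUSE at a hypothetical mount point
        mkdir -p ~/ipad
        ifuse ~/ipad
        # Copy files on/off like any directory, then unmount
        cp ~/Music/song.mp3 ~/ipad/
        fusermount -u ~/ipad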


  • software architecture (OO design) refresher course

    - by PeterT
    I am lead developer and team lead in a small RAD team. Deadlines are tight and we have to release often, which we do, and this is what keeps the business happy. While we (the development team) are trying to maintain the quality of the code (clean and short methods), I can't help but notice that the overall quality of the OO design & architecture is getting worse over time - the library we are working on is gradually reducing itself to a "bag of functions". Well, we try to use design patterns, but since we don't really have much time for design as such, we mostly use the creational ones. I have read Code Complete, Design Patterns (GoF & enterprise), The Pragmatic Programmer, and many books from the Effective XXX series. Should I re-read them, as I read them a long time ago and have forgotten quite a lot, or are there other / better OO design / software architecture books published since then which I should definitely read? Any ideas or recommendations on how I can get the situation under control and start improving the architecture? The way I see it: I will start improving the architectural / design quality of the software components I am working on, and then will start helping other team members once I find what works for me.


  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II - Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate-of-return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). We took our sample data SimpleMWRRData and turned it into a new Oracle database table called IRR_DATA via ore.create(); the database table IRR_DATA can then be accessed as if it were a normal R data.frame named IRR_DATA (the original post showed this code as screenshots; a reconstructed sketch appears below). If we go to SQL*Plus, we can also check out our new IRR_DATA table.

    At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and called our IRR function. This worked - no SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    The sketch below shows how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
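    The original post showed these steps as screenshots. As a stand-in, here is a minimal sketch of what the ore.create() load and the ore.groupApply() call generally look like in Oracle R Enterprise; SimpleMWRRData, IRR_DATA, ACCOUNT, SimpleMWRR() and the myIRR package are from the post, while the connection credentials are placeholders:

        library(ORE)

        # Connect to the database (credentials and host are placeholders)
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
                    password = "password", all = TRUE)

        # Turn the local sample data.frame into a database table, IRR_DATA,
        # afterwards accessible as an ore.frame of the same name
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        # Split IRR_DATA by ACCOUNT in the database, run SimpleMWRR() on each
        # group in an embedded R engine, and combine the per-account results
        results <- ore.groupApply(
          IRR_DATA,
          INDEX = IRR_DATA$ACCOUNT,
          function(dat) {
            library(myIRR)   # our package, installed on the DB server
            SimpleMWRR(dat)
          })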
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work: the Oracle database splits the IRR_DATA table by ACCOUNT, takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and then combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data for a single account in R memory (not the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually ran these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed: the Oracle database needs to know the structure of the input table and the grouping column, which we define using the database's pipelined table function mechanisms. Once that initial setup of rqGroupEval is done for the IRR_DATA table, the next step is to define our R function to the database, via a call to ORE's rqScriptCreate. Then we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate(). (The original screenshots of this setup and of the final query are approximated in the sketch below.)
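    As a stand-in for the screenshots, here is a minimal sketch of the rqGroupEval pattern. The names IRR_DATA, ACCOUNT, irr_dataGroupEval, and the use of rqScriptCreate are from the post; the package name, script name, parameter details, and output columns are assumptions:

        -- Tell the database the shape of the input cursor (package/type names assumed)
        CREATE OR REPLACE PACKAGE irrPkg AS
          TYPE cur IS REF CURSOR RETURN IRR_DATA%ROWTYPE;
        END irrPkg;
        /

        -- Pipelined group-eval table function partitioned by ACCOUNT
        CREATE OR REPLACE FUNCTION irr_dataGroupEval(
          inp_cur  irrPkg.cur,
          par_cur  SYS_REFCURSOR,
          out_qry  VARCHAR2,
          grp_col  VARCHAR2,
          exp_txt  CLOB)
        RETURN SYS.AnyDataSet
        PIPELINED PARALLEL_ENABLE (PARTITION inp_cur BY HASH (ACCOUNT))
        CLUSTER inp_cur BY (ACCOUNT)
        USING rqGroupEvalImpl;
        /

        -- Register the R function under a name the database knows (script name assumed)
        BEGIN
          sys.rqScriptCreate('SimpleMWRR_script',
            'function(dat) { library(myIRR); SimpleMWRR(dat) }');
        END;
        /

        -- Run it; note the parallel hint on the input cursor
        SELECT *
          FROM TABLE(irr_dataGroupEval(
                 CURSOR(SELECT /*+ PARALLEL(t, 72) */ * FROM IRR_DATA t),
                 CURSOR(SELECT 1 AS "ore.connect" FROM dual),
                 'SELECT 1 AS account, 1 AS irr FROM dual',  -- dummy output shape (assumed)
                 'ACCOUNT',
                 'SimpleMWRR_script'));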
    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate-of-return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you count hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database parallel query features: we put a parallel hint on the cursor that is the input to our rqGroupEval function (as in the final SELECT of the sketch above). That is all we need to do to enable Oracle to use parallel R engines.

    From the Real-Time SQL Monitor screenshots captured when we ran this during the proof of concept, you can notice a few things:
    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow / market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event

    Doing the arithmetic:
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single-time-period rate-of-return calculations per second (each of the 8,200 executions per second covered 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how the Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and the other Oracle Big Data Connectors to move data easily between Hadoop, R, and the Oracle Database.


  • WordPress is now nicely supported on SQL Server (and SQL Azure for that matter)

    - by Eric Nelson
    WordPress is enormously popular for blogs and full websites, thanks to the awesome ecosystem which has built up around it, the (relative) simplicity of getting it up and running, plus the flexibility to "bend it" in all sorts of directions. When I say bend, check out the following, which are all WordPress sites:
    - My "back up blog" http://iupdateable.wordpress.com/
    - My group's "odd site" :) http://ubelly.com
    - My favourite "cheap games" site http://www.frugalgaming.co.uk/

    WordPress users typically run their sites on Linux and MySQL, although PHP (the language in which WordPress is written) can happily run on Windows. Both are fine technologies in their own right, but for me (and probably a fair few others) I would love to use WordPress with the technologies I know best (aka Windows, IIS and SQL Server). However, that has proven rather tricky to get working in practice - until now. Earlier last month OmniTI released a patch for WordPress which provides SQL Server and SQL Azure support. In parallel, some fine folks inside Microsoft have created http://wordpress.visitmix.com, which contains information about running WordPress on the Microsoft platform, with a particular focus on SQL Server and SQL Azure. Top stuff!

    To run WordPress with SQL Server:
    - Download and install the WordPress on SQL Server distro/patch
    - You will then quite likely need to migrate: check out how to migrate to Windows and SQL Server by Zach Owens, who is moving his blog to Windows and SQL Server

    Enjoy!

    Related links:
    - Running PHP on IIS on Windows http://php.iis.net/
    - If PHP is not your thing, the following blog engines are .NET based: BlogEngine http://www.dotnetblogengine.net/, DasBlog http://www.dasblog.info/, Subtext http://subtextproject.com/ (which happens to power http://geekswithblogs.net, where my main blog is http://geekswithblogs.net/iupdateable)


  • Change the Integrated Weblogic Port number

    - by pavan.pvj
    There came a situation where I wanted to work with two JDevelopers simultaneously and start two different applications in the two JDEVs. (Both of them have to be in separate installation locations, else there will be a problem because of the system directory.) Now, when we want to start WLS in JDEV, only the first one will start, and the other one fails with a port-conflict exception. Until a few days back, the $1 million question was: how do you change the integrated WLS port number? So, here's the answer after some R&D.

    In the View menu, click on "Application Server Navigator". Right-click on Integrated Weblogic Server.
    1) If it is the first time that you are trying to start the server, there is a menu item "Create Default Domain". If you click on this, a window is displayed which asks for the preferred port number. Change it here.
    2) If the domain is already created, click on Properties and change the preferred port number.

    Alternatively, if you want to change the port before starting JDEV, go to $JDEV_USER_HOME/systemxxx/o.j2ee in the file system, open the file adrs-instances.xml, and change the http-port in the startup-preferences:

        <hash n="startup-preferences">
           <value n="http-port" v="7111"/>
        </hash>

    Note 1: adrs-instances.xml will be created ONLY after you create the default domain.
    Note 2: systemxxx refers to system.<JDEV version>, like system.11.1.1.3.56.59 for PS2.
    Note 3: $JDEV_USER_HOME - in Windows - would be C:\Documents and Settings\[user_name]\Application Data\JDeveloper

    Now you can run multiple integrated WLS instances simultaneously. But please be aware that running more than one WLS server will degrade system performance.


  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA:
    - channels are built once; they are like water channels with a current
    - you "drop" lookahead symbols in the right places (sources) of the channel, and they propagate with the "current"
    - when a symbol propagates, there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current); i.e. a lookahead cannot just die out of the blue

    Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state.

    How the DFA is made out of this NFA is not perfectly clear to me - the authors of the mentioned book write about preserving channels, but I see no purpose in that if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume not - the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead.

    The grammar for this DFA is:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    So how was it computed? Where was eof dropped, and on what condition?

    Update: It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):

    State 1:
        S ---> • E      [#]
        E ---> • E - T  [#-]
        E ---> • T      [#-]
        T ---> • n      [#-]
        T ---> • ( E )  [#-]

    State 6:
        T ---> ( • E )  [#-)]
        E ---> • E - T  [-)]
        E ---> • T      [-)]
        T ---> • n      [-)]
        T ---> • ( E )  [-)]

    The arc from 1 to 6 is labeled (.


  • CodePlex Daily Summary for Wednesday, August 22, 2012

    CodePlex Daily Summary for Wednesday, August 22, 2012

    Popular Releases:
    - LINQ to Twitter: LINQ to Twitter Beta v2.0.29 - supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. LINQ to Twitter Samples contains example code for using LINQ to Twitter with various .NET technologies; downloadable source code also has C# samples in the LinqToTwitterDemo project and VB samples in the LinqToTwitterDemoVB project.
    - OutlookGoogleSync: OutlookGoogleSync v1.0.5 - changed the Outlook Primary Interop Assembly to v11 (Office 2003) to support older Office versions; more info about start/end/needed time; got rid of app.config; changed double click to single click on the tray icon.
    - ZXing.Net: ZXing.Net 0.8.0.0 - sync with rev. 2393 of the Java version; improved API; direct support for multiple barcode decoding; wrapper for barcode generating; many other improvements and fixes; encoder and decoder command line clients; demo client for EmguCV; dev documentation started.
    - ScintillaNET: ScintillaNET 2.5.1 - built from the 2.5 branch. Issues closed: 32524, 32550, 32552, 25148, 32449, 32551, 32711.
    - MFCMAPI: August 2012 Release - build 15.0.0.1035; full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables; if you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed; all other machines should use the 32-bit builds, regardless of the operating system.
    - Document.Editor: 2013.2 - new "save as HTML document"; improved translate support; minor bug fixes, improvements and speed-ups.
    - Pulse: Pulse Beta 5 - Wallbase.cc authentication, so you can access favorites or NSFW; requires .NET 4.0; Pulse can now be set to start on Windows startup; the wallpaper setter has settings for the desktop background color and picture position (Tile/Center/Fill/etc.); switched to Windows Forms instead of WPF.
    - Metro Paint: Metro Paint - download it now; don't forget to give feedback at maitreyavyas@live.com or fb.com/maitreyavyas.
    - MiniTwitter: 1.80 - requires .NET Framework 4.5 (the remaining Japanese release notes are garbled in this copy).
    - Droid Explorer: Droid Explorer 0.8.8.6 Beta - device images are now pulled from the DroidExplorer cloud service; refined some issues with the usage statistics; added a method to get the first available value from a list of property names; DroidExplorer.Configuration no longer depends on DroidExplorer.Core.UI (it is actually the other way now); fix to the bootstrapper to only try to delete the SDK if it is a "local" SDK, not an existing one; no longer supports the "local" SDK, you must now select an existing SDK; checks for sdk if it was ins...
    - Path Copy Copy: 11.0.1 - bugfix release that corrects issue 11365. If you are using Path Copy Copy in a network environment and use the UNC path commands, it is recommended that you upgrade to this version.
    - ExtAspNet: ExtAspNet v3.1.9.1 - Chinese release notes garbled in this copy.
    - AcDown Downloader Framework: AcDown v4.0.1 - Chinese release notes garbled in this copy; supports sites including Acfun, Bilibili and YouTube; 32- and 64-bit Windows XP/Vista/7/8; requires .NET Framework 2.0.
    - Fluent Validation for .NET: 3.4 - changes since 3.3: make ValidationResult.IsValid virtual; add a private no-arg ctor to ValidationFailure to help with serialization; add Turkish error messages; work-around for a reflection bug in .NET 4.5 that caused VerificationExceptions; assemblies are now unsigned to ease versioning/upgrades, especially where other frameworks depend on FV (if you need signed assemblies, use the NuGet packages FluentValidation-signed, FluentValidation.MVC3-signed, FluentV...).
    - DotNetNuke® Feedback: 06.02.01 - official release, 17th August 2012. Please see the release notes file included in the module packages (or available on the release page as a separate download) for the bug fixes and enhancements in this version. NOTE: Feedback v06.02.00 REQUIRES a minimum DotNetNuke framework version of 06.02.00, as well as ASP.NET 3.5 SP1 and MS SQL Server 2005 or 2008 (Express or standard). This release brings some enhancements to the module as well as fixing all known bugs.
    - AssaultCube Reloaded: 2.5.3 Unnamed Fixed - if you are using deltas, download 2.5.2 first, then overwrite with the delta packages. Linux has Ubuntu 11.10 32-bit and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as the release also contains the source. Mac and other OS users: please wait while we package for those OSes, or try to compile it (if that fails, download a virtual machine). The server pack is ready for both Windows and Linux, but you might need to compile your...
    - Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.6.1 - bug fix release: better support for transparent images; IsFrozen respected if not bound; corrected deadlock state.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.7 (Milestone 7) - this release contains the source code of WAF and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] member marked obsolete. WAF: add CollectionHelper.GetNextElementOrDefault method. InfoMan: support creating a new email and saving it in the Send b...
    - myCollections: Version 2.2.3.0 - new in this version: added setup package; added Amazon Spain for apps, books, games, movies, music, NDS and TV shows; added TVDB Spain for TV shows; added TMDB Spain for movies; added auto-rename files from title; added more filters when adding files (vob, mpls, ifo, ...); improved books author and music artist credits; rewrote find-duplicates for better performance; you can now add a custom link to items; you can now add a type directly from the type list using the right mouse button.
    - Player Framework by Microsoft: Player Framework for Windows 8 Preview 5 (Refresh) - support for Windows 8 and Visual Studio RTM; support for Smooth Streaming SDK beta 2; support for live playback; new bitrate meter and SD/HD indicators; auto smooth streaming track restriction for snapped mode to conserve bandwidth; new "Go Live" button and SeekToLive API; support for offset start times; support for live position unique from end time; support for multiple audio streams (smooth and progressive content); improved IntelliSense in the JS version.

    New Projects:
    - .NET Winforms Gantt Chart Control: allows users to quickly create charts for prototyping or simple use cases in bigger projects.
    - BrowseByURL: tool that will select the right browser for displaying your URL.
    - dummy2: this is a test project.
    - Fit Protocol Library: a library to parse and edit FIT files, used by fitness devices such as the Garmin series of fitness GPS devices.
    - Guild Wars 2 Build and Rotation Generator: an audacious attempt to make an automatic character build generator in C# using the AForge genetic libraries.
    - ISMOT - Kinect Gesture Library: a cool gesture library.
    - Kaqaz: a simple weblog engine based on the Xoqal framework (see the related projects for the Xoqal project link).
    - OAuth Lite: an easy-to-use library to simplify access to web resources which use OAuth 2.0 for authentication.
    - OutlookGoogleSync: a small tool to keep the Google calendar in sync with the Outlook calendar (one way: Outlook -> Google). Doesn't need admin rights and works behind a proxy.
    - pboa1: (Chinese description garbled in this copy)
    - Publishing Point: a tool about publishing points for Media Player and MediaElement, by Leonidas Fengos. Please use and try it!
    - Rubik Database Tools: coming soon...
    - Saturn Kinect: a Kinect interaction library containing classes for managing skeleton motions, detecting motions, moving the mouse cursor, etc.
    - Shammateh: a simple time tracker which tends to be a personal time-sheet manager, based on the Xoqal framework.
    - SharePoint Webtools: a web application that makes administering a SharePoint team site more convenient. It is fully client side and requires no installation.
    - TestOAuth: this is a test project.
    - Type Implementer: a small .NET library (based on System.Reflection.Emit) whose purpose is to facilitate dynamic type generation at run-time.
    - Visualizer3D: TBA
    - WarmMeUp: the first SharePoint 2013 (compatible with 2010) warm-up tool designed for large SharePoint farms (warming on every server of the farm).
    - Xoqal: an application framework targeting the .NET 4+ platform; it supports both web and Windows applications with the same infrastructure.
    - ZipFileEx: adds async/await and IProgress<T> support to the ZipFile/ZipArchive classes.


  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
    If we have to get data from a SQL database, the standard way is to use a receive port with the SQL adapter. The SQL receive adapter is a solicit-response adapter: it periodically polls the SQL database with queries. That's the only way it can work. Sometimes this is undesirable. With the new WCF-SQL adapter we can use a more lightweight approach, but still with the same principle: the WCF-SQL adapter periodically solicits the database with queries to check for new records.

    Imagine a situation where new records can appear across very broad time intervals - some a second apart, others several minutes apart - and our requirement is to process the new records ASAP. That means the polling interval should be near the shortest interval between new records, i.e. a one-second interval. As a result, most of the poll queries would return nothing and would load the database without good reason. If the database is working under heavy load, that is very undesirable.

    Do we have other choices? Sure. We can change the polling to "eventing". The good news is that SQL Server can raise an event when new records arrive, using triggers: got a new record - the trigger event fires; no new records - no trigger events - no excessive load on the database. The bad news is that SQL Server doesn't have intrinsic methods to send the event data outside. We would rather use adapters that listen for data and do not solicit. There are several such adapter-listeners: File, FTP, SOAP, WCF, and MSMQ. But SQL Server doesn't have methods to create and save files, to consume web services, or to create and send messages to a queue - does it? Can we use the File, FTP, MSMQ, or WCF adapters to get data from SQL code? Yes, we can. SQL Server 2005 and 2008 can run .NET code inside SQL code - see SQL CLR Integration. Here is how it works for MSMQ, for example:
    - A new record is created, and a trigger fires
    - The trigger calls a CLR stored procedure and passes the message parameters to it
    - The CLR stored procedure creates a message and sends it to the outgoing queue on the SQL Server computer
    - The MSMQ service transfers the message to the queue on the BizTalk Server computer
    - The WCF-NetMsmq adapter receives the message from this queue

    For the File adapter the idea is the same: the CLR stored procedure creates and stores a file with the message, and then the File adapter picks up this file.

    Using the WCF-NetMsmq adapter to get data from SQL

    I am describing the full set of development and deployment steps for the WCF-NetMsmq case.

    Development:
    1. Create the .NET code: a project, class, and method to create and send the message to the MSMQ queue.
    2. Create the SQL code in triggers to call the .NET code (a sketch of such a trigger appears after the Notes at the end).

    Installation and deployment:
    1. SQL Server:
       a. Register the CLR assembly with the .NET (CLR) code
       b. Install the MSMQ services
    2. BizTalk Server:
       a. Install the MSMQ services
       b. Create the MSMQ queue
       c. Create the WCF-NetMsmq receive port

    The detailed description is below.

    Code

    The .NET code:

        using System.Xml;
        using System.Xml.Linq;
        using System.Xml.Serialization;

        // namespace MyCompany.MySolution.MyProject - doesn't work. The assembly name is
        // MyCompany.MySolution.MyProject. I gave up on the compound namespace; it seems
        // the CLR Integration cannot work with it :( Maybe I'm wrong.
        public class Event
        {
            static public XElement CreateMsg(int par1, int par2, int par3)
            {
                XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc";
                XElement xdoc =
                    new XElement(ns + "TypedPolling",
                        new XElement(ns + "TypedPollingResultSet0",
                            new XElement(ns + "TypedPollingResultSet0",
                                new XElement(ns + "par1", par1),
                                new XElement(ns + "par2", par2),
                                new XElement(ns + "par3", par3))));
                return xdoc;
            }
        }

        ////////////////////////////////////////////////////////////////////////

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Transactions;
        using System.Data;
        using System.Data.Sql;
        using System.Data.SqlTypes;

        public class MsmqHelper
        {
            [Microsoft.SqlServer.Server.SqlProcedure]
            // msmqAddress as "net.msmq://localhost/private/myapp.myqueue"
            public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3)
            {
                using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
                    binding.ExactlyOnce = true;
                    EndpointAddress address = new EndpointAddress(msmqAddress);

                    using (ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, address))
                    {
                        IOutputChannel channel = factory.CreateChannel();
                        try
                        {
                            XElement xe = Event.CreateMsg(par1, par2, par3);
                            XmlReader xr = xe.CreateReader();
                            Message msg = Message.CreateMessage(MessageVersion.Default, action, xr);
                            channel.Send(msg);
                            // SqlContext.Pipe.Send(...); // to test
                        }
                        catch (Exception ex)
                        {
                            ...
                        }
                    }
                    scope.Complete();
                }
            }
        }

    The SQL code in the trigger:

        -- sp_SendMsg was registered as a name of MsmqHelper.SendMsg()
        EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3

    Installation and Deployment

    On the SQL Server

    Registering the CLR assembly:
    1. Prerequisites: .NET 3.5 SP1 Framework. This could be an issue for the production SQL Server!
    2. For more information, please see http://nielsb.wordpress.com/sqlclrwcf/
    3. Copy files:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"

       If your machine is 64-bit, run two commands:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"
        >copy "\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"
    4. Execute the SQL code to register the .NET assemblies:

        -- For x64 OS:
        CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe

        -- For x32 OS:
        --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe

    5. Register the assembly with the external stored procedure:

        CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM '<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe

       where <FilePath> is the path of the file on this machine!

    6. Create the external stored procedure:

        CREATE PROCEDURE sp_SendMsg (
            @msmqAddress nvarchar(100),
            @Action NVARCHAR(50),
            @par1 int,
            @par2 int,
            @par3 int
        )
        AS EXTERNAL NAME HelperClass.MsmqHelper.SendMsg

    Installing the MSMQ services:
    1. Check whether the MSMQ service is NOT already installed. To check: Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications" and look for "Message Queuing". If you cannot see it, follow the next steps.
    2. Start / Control Panel / Programs and Features
    3. Click "Turn Windows Features on or off"
    4. Click Features, click "Add Features"
    5. Scroll down the feature list; open "Message Queuing" / "Message Queuing Services"; check the "Message Queuing Server" option
    6. Click Next; click Install; wait for the installation to finish successfully

    Creating the MSMQ queue

    We don't need to create the queue on the "sender" side.

    On the BizTalk Server

    Installing the MSMQ services: the same as for the SQL Server.

    Creating the MSMQ queue:
    1. Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications", open "Message Queuing", and open "Private Queues".
    2. Right-click "Private Queues"; choose New; choose "Private Queue".
    3. Type the queue name as 'myapp.myqueue' and check the "Transactional" option.

    Creating the WCF-NetMsmq receive port

    I will not go through this step in full detail; it is straightforward. The URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'.

    Notes
    - The biggest problem is usually the "Registering the CLR assembly" step. It is hard to predict where the assemblies from the assembly list are, and whether the x86 or x64 version should be used. It is a pity the SQL-to-.NET integration is so "rude".
    - In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port with netMsmqBinding; that worked fine for me.
    - To test how messages go through the queue, you can turn on the Journal/Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it shows only a small part of the message body, and in a weird format. QueueExplorer does a better job: it shows the whole body, and XML messages get good color formatting.
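    Returning to step 2 of the development list above ("create the SQL code in triggers"): the article shows only the EXEC call. A minimal sketch of what a full trigger could look like; the table name MyTable and its columns are hypothetical, only the sp_SendMsg call is from the article:

        -- Hypothetical table/columns; the EXEC call itself is from the article
        CREATE TRIGGER trgMyTableInsert ON MyTable
        AFTER INSERT
        AS
        BEGIN
          DECLARE @par1 int, @par2 int, @par3 int;
          -- assumes single-row inserts; a real trigger should iterate over `inserted`
          SELECT @par1 = col1, @par2 = col2, @par3 = col3 FROM inserted;
          EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue',
               'Create', @par1, @par2, @par3;
        END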


  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package where I deleted the /etc/init.d/disco-master file (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now?

    This is what happens when I do sudo apt-get remove disco-master:

        removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--remove):
         subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing:
         disco-master
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I do sudo apt-get install --reinstall disco-master I get the following:

        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I do sudo apt-get -f install I get this:

        Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack):
         trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I run sudo apt-get remove disco-node I get the following:

        Package disco-node is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed
                        Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I did sudo dpkg -P --force-all disco-master I got:

        Removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--purge):
         subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing:
         disco-master
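    A common way out of this situation (offered here as a hedged suggestion, not part of the original question) is to make the failing pre-removal step a no-op so dpkg can finish: either recreate a dummy initscript for invoke-rc.d to find, or neutralize the package's prerm script in dpkg's database:

        # Option 1: give invoke-rc.d something to find (dummy initscript)
        sudo tee /etc/init.d/disco-master >/dev/null <<'EOF'
        #!/bin/sh
        exit 0
        EOF
        sudo chmod +x /etc/init.d/disco-master
        sudo apt-get remove disco-master

        # Option 2: replace the package's prerm script with a no-op
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/disco-master.prerm
        sudo dpkg --remove disco-master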


  • How to determine where on a path my object will be at a given point in time?

    - by Dave
    I have a map and an object that is meant to move from start to end in X amount of time. The movements are all straight lines, as curves are beyond my ability at the moment. So I am trying to get the object to move between these points, but along the way there are waypoints which keep it on a given path. The speed of the object is determined by how long it will take to get from start to end (based on X). This is what I have so far (the "=<" in the original was a typo for a less-than comparison):

        // get_now() returns seconds since epoch
        var timepassed = get_now() - myObj[id].start; // seconds since epoch for departure
        var timeleft = myObj[id].end - get_now();     // seconds since epoch for arrival
        var journey_time = 60;          // this means 60 minutes total journey time
        var array = [[650, 250]];       // waypoints along the straight paths
        if (step == 0 || step < array.length) {
            var destinationx = array[step][0];
            var destinationy = array[step][1];
        } else if (step == array.length) {
            var destinationx = 250;
            var destinationy = 100;
        } else {
            var destinationx = myObj[id].startx;
            var destinationy = myObj[id].starty;
        }
        step++;

    When the user logs in at any given time, the object needs to be drawn in the correct place along the path, almost as if it had been travelling along the path while the user was not at the PC, using the information available above. How do I do this? Note: the camera angle in the game is a bird's-eye view, so it's a straightforward X:Y rather than isometric angles.
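    One way to approach this (a sketch, not from the original post): treat start, waypoints, and end as a polyline, compute the fraction of journey time elapsed, and map that fraction to a distance along the polyline. The function and variable names here are hypothetical:

        // Position along a polyline after a fraction of the journey has elapsed.
        // points = [[x0,y0], [x1,y1], ...] including start, waypoints, and end.
        function positionAt(points, startTime, endTime, now) {
            var frac = (now - startTime) / (endTime - startTime);
            frac = Math.max(0, Math.min(1, frac)); // clamp to [0, 1]

            // total path length, segment by segment
            var lengths = [], total = 0;
            for (var i = 0; i < points.length - 1; i++) {
                var dx = points[i + 1][0] - points[i][0];
                var dy = points[i + 1][1] - points[i][1];
                var len = Math.sqrt(dx * dx + dy * dy);
                lengths.push(len);
                total += len;
            }

            // walk segments until we reach the one containing frac * total
            var remaining = frac * total;
            for (var j = 0; j < lengths.length; j++) {
                if (remaining <= lengths[j]) {
                    var t = lengths[j] === 0 ? 0 : remaining / lengths[j];
                    return [points[j][0] + t * (points[j + 1][0] - points[j][0]),
                            points[j][1] + t * (points[j + 1][1] - points[j][1])];
                }
                remaining -= lengths[j];
            }
            return points[points.length - 1]; // frac == 1
        }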


  • What resources are there for creating a dedicated NES emulator box?

    - by normalocity
    Where do I start, and what communities should I get involved in, in order to achieve the following? Ideally, I'd like to have a box that does the following (it doesn't have to do this out of the box; I'm just looking to achieve these goals through configuration and the necessary dependencies):
    - Either bypasses login, or auto-logs-in
    - Auto-starts FCEUX with options that will (a) automatically start a ROM of my choosing, and (b) go into full-screen mode. You can assume that before I get that far, I've already configured the input devices and video options.
    - Has a full-screen app (which I'd create, or install if it exists) that takes a list of ROMs, allows me to select one with a gamepad/arcade stick, and press a button to open that game
    - Can map a button on a gamepad/arcade stick to the "power off" or exit function of the emulator, such that it will take me back to the ROM selection screen

    I've already successfully installed FCEUX and tested it with an arcade stick I own, so I'm not looking for an emulator installation guide. I don't know if the ROM selector app exists already, but I'm a Java developer and could probably create one (so long as it's not too difficult to support controllers - I was thinking of using Slick2D for this, a gaming library that I'm already pretty familiar with). The goal is a dedicated box connected to my TV: I power it on, it boots up and starts the ROM selection app, which passes the proper parameters to FCEUX (or another emulator that I might switch to at a later time), and I'm ready to go. Basically an NES emulator as a real, living-room console. Also, as far as mapping a controller button to functions in the app: I've also played around with hardware, and it would be pretty trivial for me to modify a gamepad to trigger key presses. I just don't want to go to that length if it's not necessary.
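    For the auto-login/auto-start part, one common shape of the solution is sketched below, under stated assumptions: a display manager configured to auto-login a dedicated user, a session that honors ~/.xsession, and an emulator whose exact fullscreen/ROM flags you verify yourself (e.g. via fceux --help):

        # ~/.xsession for the dedicated auto-login user
        # Loop so that exiting the emulator returns to the selector/ROM
        while true; do
            # fceux accepts a ROM path; fullscreen flags vary by build,
            # so check `fceux --help` on your install
            fceux "$HOME/roms/favorite.nes"
        done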


  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release - the first release that supported the P6 Extended Schema - a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default.

    Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics'. Any criteria that can be defined in a Where clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases: unnecessary projects can cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process.

    To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (the extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

       The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will make the view invalid and it will not work. Any Where clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line (or update it if it already exists):

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.
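    As a second illustration of the "any Where clause works" point above, a filter on a project code assignment could look roughly like the sketch below. The code-related table and column names here are assumptions, not taken from the article; only the rule that the view must select projectobjectid is:

        -- Hypothetical code tables/columns; only the projectobjectid rule is from the article
        create or replace view star_project_view
        as
        select pca.projectobjectid objectid
        from projectcodeassignment pca, projectcodevalue cv
        where pca.projectcodevalueobjectid = cv.objectid
        and cv.codevalue = 'For Analytics'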


  • CodePlex Daily Summary for Tuesday, July 30, 2013

    CodePlex Daily Summary for Tuesday, July 30, 2013

    Popular Releases:
    - nopCommerce, open source shopping cart (ASP.NET MVC): nopCommerce 3.10 - highlight features & improvements: performance optimization; new, more user-friendly product/product-variant logic (now we'll have only products, simple and grouped); bundle products support added; allow a store owner to associate a product image with product variant attribute values. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    - Small Tools: Helpers 1.01 - fix params count issue; fix STAThread issue; add support for exe.config files.
    - ExtJS based ASP.NET Controls: FineUI v3.3.1 - Chinese release notes garbled in this copy. FineUI is an ExtJS-based ASP.NET control library ("No JavaScript, No CSS, No UpdatePanel, No ViewState, No WebServices"); supports IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+; Apache License v2.0 (note: ExtJS itself is under GPL v3, http://www.sencha.com/license); see http://fineui.com/ and http://fineui.codeplex.com/
    - AutoNLayered - Domain Oriented N-Layered .NET 4.5: AutoNLayered v1.0.5 - fix DTOs: abstract collections replaced by concrete (correct WCF serialization); OrderBy in navigation properties; unit tests with Fakes; map of entities/DTOs moved to application services; libraries updated. Warning when using Fakes: http://connect.microsoft.com/VisualStudio/feedback/details/782031/visual-studio-2012-add-fakes-assembly-does-not-add-all-needed-references
    - Path Copy Copy: 11.1 - minor release with two new features: the submenu's contextual menu item now has an icon next to it; added a reference to the JavaScript regular expression format in the Settings application. Since this release does not have any glaring bug fixes, it is more of an optional update for existing users; install it to see if the icon makes sense. As always, please don't hesitate to leave feedback.
    - CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.0 RC3 - the third release candidate, with the following bug fixes: opening a CMake file from Windows Explorer while Visual Studio is already open will not start a new instance of Visual Studio; typing a symbol while the IntelliSense list box is visible, where the text typed so far does not match any item in the list, will dismiss the list box and insert the symbol typed.
    - R.NET: R.NET 1.5 - major changes: the Initialize method must be called before using R (settings should be passed to the method); the EagerEvaluate method was renamed to Evaluate (use the Defer method when you want the old version of Evaluate).
    - Media Companion: Media Companion MC3.574b - some good bug fixes for the new XBMC-Link function; thanks to all who tested and gave feedback. New: added some ad-hoc extra general movie filters, one of which is Plot = Outline; to see the filters, add the line <ShowExtraMovieFilters>True</ShowExtraMovieFilters> to your config.xml. The others are: Imdb in folder name, Imdb not in folder name, and Imdb not in folder name & year mismatch. Movie - display <tag> list on browser tab...
    - OfflineBrowser: Preview Release with Search - search has been added to this release.
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.46 - changes: FIXED login.
    - Math.NET Numerics: Math.NET Numerics v2.6.0 - see "What's New in Math.NET Numerics 2.6" for the announcement, explanations and sample code. New: linear curve fitting - linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions; multi-dimensional fitting; also works well in F# with the F# extensions. New: root finding - Brent's method (~Candy Chiu, Alexander Täschner); bisection method (~Scott Stephens, Alexander Täschner); Broyden's method for multi-dimensional functions (~Alexander Täschner); ...
    - AJAX Control Toolkit: July 2013 Release - release notes, version 7.0725. AJAX Control Toolkit for .NET 4.5, .NET 4, and .NET 3.5, each with a sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    - MJP's DirectX 11 Samples: Specular Antialiasing Sample - sample code to complement the presentation "Crafting a Next-Gen Material Pipeline for The Order: 1886", part of the Physically Based Shading in Theory and Practice course at SIGGRAPH 2013. Demonstrates various methods of preventing aliasing from specular BRDFs when using high-frequency normal maps. The zip file contains source code as well as a pre-compiled x64 binary.
    - DataPie (Excel tooling for MSSQL 2008, ORACLE, ACCESS 2007): DataPieV3.6.1 - Chinese release notes garbled in this copy (mentions CSV import and SQL support).
    - Qibla Compass for Windows Phone: Qibla Compass for Windows Phone - this release is an open beta; you can always download it and provide your feedback. Since it was just developed to give users an idea of the Qibla direction and its mapping, you might not see major releases in future.
    - Event Scavenger: Version 5 - a full (recommended) release of version 5; I've been running it myself for months without issues. This release contains just the installs; the web site's documentation has not been updated yet and reflects the previous version. If you have an issue with this version you can switch back to 4.x; version 5 can run side-by-side with earlier versions (service), as it has a new service and database.
    - wpadk: WPadk WP8 release - Chinese release notes garbled in this copy (v1.1, for Windows Phone 8).
    - StockSharp: StockSharp 4.1.16 - Russian release notes garbled in this copy; see http://stocksharp.com/forum/yaf_postsm28239_S--API-4-1.aspx#post28239
    - GeoTransformer: GeoTransformer 4.5 - extensions can now be installed and uninstalled from the application, and they update the same way as the application (silently and automatically); added the ability to search for caches by pressing CTRL+F in the table views (thanks to JanisU for implementing this request); added the ability to remove edited customizations for multiple caches at once (use SHIFT or CTRL to select multiple lines in the table); a new experimental version for Windows 8 RT (on ARM processors) is also made availa...
    - Kartris E-commerce: Kartris v2.5003 - fixes an issue where search engines appear to identify as IE and so trigger the noIE page if there is no non-responsive skin specified.

    New Projects:
    - Bus Booking System: bus booking system.
    - C# project (Chinese name and description garbled in this copy)
    - Cotizav 2.0: este proyecto es para el soporte de Cotizaciones (a project to support quotations).
    - DeferredShading: deferred shading renderer.
    - IVR Junction: connects an Interactive Voice Response (IVR) system to cloud services such as YouTube, Facebook and other social media.
    - Mac Address Changer: a quick and easy tool to change your MAC address.
    - motokraft user control: user control for motokraft.
    - Single Reference JavaScript Pattern: a very simple pattern - you need to refer to only one script in a page, saving development time as well as maintenance.
    - Social_Life_Time: a social network where people can communicate with each other.
    - The Ironic Text Based MMORPG: modern MMORPGs have become highly interactive, complex systems of skills, stats, and action combat; this game introduces a new level of text-based immersion!
    - Timeline Year Control: an ASP.NET year indicator timeline control.
    - winrtsock: a winsock façade for Windows Runtime, for porting BSD socket code to Windows Runtime.
    - Zker: no summary.
    - (a final project whose Chinese name and description were garbled in this copy)

    Read the article

  • Installation experiences with NDepend under Win7/64 with restricted user permissions

    - by Marko Apfel
    Today Patrick gave me a new license for his static code analysis tool NDepend, for my fresh machine with Win7/64. This platform is new to me, so some things are different from Win XP, and maybe some of them are not yet well enough understood by me; I stepped into some traps. Here are my notes on getting NDepend running. I downloaded the NDepend Professional Edition from http://www.ndepend.com/NDependDownload.aspx and extracted it to c:\program files (x86)\NDepend. Starting NDepend.Install.VisualStudioAddin.exe failed with a missing-license error. Okay, sounds plausible. So I copied NDependProLicense.xml to that folder. The next run of NDepend.Install.VisualStudioAddin.exe opened the integration dialog, but registering in Visual Studio failed because the files were blocked. So I manually unblocked the file as described (first solution hint). And here comes my largest understanding problem: after unblocking the file and closing the properties dialog, reopening it showed the file as blocked again. Why? So the same error popped up during integration. Okay, I tried the second solution hint, copying the folders: I copied everything to a fully accessible folder under c:\temp\, and there the installation worked and looked good. But after copying the folders back to c:\program files (x86)\NDepend, starting Visual Studio failed again. Okay, so I copied the folder to a private application folder, c:\users\apf\My Applications\NDepend, and installed again. Now Visual Studio runs and NDepend is integrated. Nevertheless, although my machine is only used by me, I prefer "all users" installations; sadly, the described way works only for my account.
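
    A note on the repeated-blocking issue: files downloaded from the internet carry an NTFS Zone.Identifier stream, and every file extracted from such a zip keeps its own copy, so unblocking a single file in Explorer is not enough. As a hedged sketch (assuming the Sysinternals streams tool is on the PATH), the flag can be cleared for the whole folder in one go, from an elevated prompt since the folder lives under Program Files:

        REM delete alternate data streams recursively (streams.exe is from Sysinternals)
        streams.exe -s -d "c:\program files (x86)\NDepend"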

    Read the article

  • /tmp shows 690 Mb full, actual size 72 K, Why?

    - by Ankit
    Why is the /tmp directory on my system showing 690 MB full, whereas du -sh /tmp shows only 72K used?

    drwxrwxrwt 2 lightdm lightdm 4096 Aug 29 21:49 at-spi2
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 keyring-0JTfoY
    drwx------ 2 ankit ankit 4096 Aug 29 21:44 keyring-rChLLL
    drwx------ 2 root root 16384 Jul 22 02:10 lost+found
    drwx------ 2 ankit ankit 4096 Jan 1 1970 orbit-ankit
    drwx------ 2 lightdm lightdm 4096 Aug 29 21:50 pulse-2L9K88eMlGn7
    drwx------ 2 root root 4096 Aug 29 21:44 pulse-PKdhtXMmr18n
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 pulse-zR1TZUAZfmQW
    drwx------ 2 ankit ankit 4096 Aug 29 21:44 ssh-dlslOXOq2203
    drwx------ 2 ankit ankit 4096 Aug 29 21:50 ssh-MrQQVRyy3316
    -rw------- 1 ankit ankit 0 Aug 29 21:45 tmp0qnNG4
    -rw------- 1 ankit ankit 0 Aug 29 21:50 tmpVvSMt6
    -rw------- 1 ankit ankit 0 Aug 29 21:49 tmpy9Gadz
    -rw-rw-r-- 1 lightdm lightdm 0 Aug 29 21:44 unity_support_test.0

    ankit@duster:/tmp$ df -h
    df: `/home/ankit/.gvfs': Transport endpoint is not connected
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 79G 11G 65G 14% /
    udev 2.9G 4.0K 2.9G 1% /dev
    tmpfs 1.2G 868K 1.2G 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 2.9G 220K 2.9G 1% /run/shm
    /dev/sda7 38G 690M 35G 2% /tmp
    /dev/sda5 93G 26G 63G 30% /home
    /dev/sda6 93G 1.6G 87G 2% /boot
    /dev/sda3 154G 69G 78G 48% /home/mount_150

    ankit@duster:/tmp$ sudo du -sh /tmp/
    72K
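
    Since /tmp here is its own ext4 filesystem (/dev/sda7), df and du are not measuring the same thing: df counts filesystem overhead such as the ext4 journal, plus space held by files that have been deleted but are still open in some process, neither of which du can see. On a fresh 38G ext4 volume, the journal and metadata alone can plausibly account for several hundred MB of "Used" with almost no files present. A couple of hedged diagnostics (assuming lsof and dumpe2fs are installed):

        # files deleted but still held open: counted by df, invisible to du
        sudo lsof +L1 | grep /tmp
        # filesystem overhead on the /tmp partition, including the journal size
        sudo dumpe2fs -h /dev/sda7 | grep -i journal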

    Read the article

  • Server-side Architecture for Online Game

    - by Draiken
    Hi. Basically, I have a game client that has to communicate with a server for almost every action it takes. The game is in Java (using LWJGL) and right now I will start making the server. The base of the game is normally one client communicating with the server alone, but later on I will need several clients to work together for some functionality. I've already read that the authentication server should be separated, and I intend to do that. The problem is that I am completely inexperienced in this kind of server-side programming; all I've ever programmed were JSF web applications. I imagine I'll use socket connections for pretty much all game communication, since HTTP is very slow, but I still don't really know where to start on my server. I would appreciate reading material or guidelines on where to start, what architecture the game server should have, and maybe some suggestions on frameworks that could help with the client-server communication. I've looked into JNAG but I have no experience with this kind of thing, so I can't really tell if it is a solid and good messaging layer. Any help is appreciated... Thanks!

    Read the article

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage, and the system load indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison:

    BEFORE
    - Cache was very sparingly used; pretty much none of my memory usage was cache.
    - Swappiness was 0 (memory was used first, then swap only if needed).
    - Performance was quite good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory.

    AFTER
    - Cache fills the full 4GB as soon as my virtual machine is run.
    - Swappiness is 0 (same behaviour as before, but the cache claims the full memory straight away).
    - Performance is terrible and unusable while running Ubuntu software: basic things like changing windows take 2 minutes or more, changing screens happens frame by frame, sometimes over up to 5 minutes, and I cannot run an IDE and a VM together like I could with ease before.

    So basically, any suggestions on how to take my performance back to how it was before while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks

    EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at about 50% used and 50% cache and go into the complete lock-up mentioned above. Contrastingly, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache will stay as the complete remaining memory, and there is no noticeable performance degradation.
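
    One knob worth trying, as a hedged sketch: by default VMware Workstation backs guest RAM with a file on the host, which can show up as exactly this kind of page-cache churn. The settings below are commonly suggested for it, but treat them as an experiment; MyVM is a placeholder for your machine's name, and whether they apply to your Workstation version is an assumption. Shut the VM down before editing the .vmx, then start it again and watch the cache line in the system monitor:

        # keep guest memory in host RAM instead of backing it with a host file
        echo 'mainMem.useNamedFile = "FALSE"' >> ~/vmware/MyVM/MyVM.vmx
        # host-wide: fit all VM memory into reserved host RAM
        echo 'prefvmx.minVmMemPct = "100"' | sudo tee -a /etc/vmware/config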

    Read the article

  • DropSpace Syncs Android Files to Dropbox

    - by ETC
    DropSpace is a free Android application that fixes the primary issue plaguing the official Dropbox app for Android: the lack of true file synchronization. Grab a copy of DropSpace and start enjoying true file syncing on the go. The official Dropbox app is limited to grabbing files from your Dropbox account or pushing files from your phone to your Dropbox account. Actual file synchronization, this manual push/pull model aside, is nowhere to be found. DropSpace fills that gap by enabling file synchronization between your SD card directories and your Dropbox directories. It's packed with handy features, including restricting file syncing to Wi-Fi connections only (great if you don't want to chew up your very limited data plan) as well as numerous toggles for various settings, such as whether it should delete remote files if the local file is deleted, how often it should run the sync service, and more. Hit up the link below to grab a copy and take it for a test drive. DropSpace is free and works wherever Android does; a Dropbox account is required. DropSpace [via Addictive Tips]

    Read the article

  • mpd conflicting with other applications -- taking control of pulse?

    - by Jamie Schembri
    Simple explanation: If mpd is playing and another application, X, attempts to play sound, no sound from X will be output. If sound from another application, X, is playing and mpd then attempts to play, no sound will be output from mpd while the sound from X continues to play.

    Details: I first noticed this problem with Flash, and this continues to be the most common scenario. I posted a question about this before realising it was not strictly Flash-related but instead something to do with mpd. My biggest frustration comes from trying to get mpd working again, as I can't seem to pin down any one method. Sometimes pulseaudio -k seems to help, other times sudo /etc/init.d/mpd restart, other times killing Chromium (due to Flash) with SIGTERM. Most of the time it's a combination of the above. I think this might be because I run mpd as another user and use pulseaudio; it is not run as root or as the current user. Also, mpd is compiled with pulse support. I have tried numerous things, however I honestly couldn't recite what, as it has been some time. I'd rather not go poking around without some direction, but I'd be really happy to fix this problem once and for all.

    mpd.conf (simplified by removing comments/blank lines):

    music_directory "/var/lib/mpd/music"
    playlist_directory "/var/lib/mpd/playlists"
    db_file "/var/lib/mpd/tag_cache"
    log_file "/var/log/mpd/mpd.log"
    pid_file "/var/run/mpd/pid"
    state_file "/var/lib/mpd/state"
    user "mpd"
    bind_to_address "wilson"
    input {
        plugin "curl"
    }
    audio_output {
        type "pulse"
        name "My Pulse Output"
    }
    filesystem_charset "UTF-8"
    id3v1_encoding "UTF-8"

    Question: For the sake of keeping this a question: does anyone know what is causing this, or how to fix it?
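
    A plausible cause, given that mpd runs as its own user: each desktop user gets a private PulseAudio daemon, so the mpd user and the logged-in user end up competing for the ALSA device instead of sharing one sound server. A hedged sketch of a common workaround is to let the desktop user's PulseAudio accept localhost TCP connections and point mpd at it (the module name and option are standard PulseAudio, but treat the exact setup as an assumption to verify):

        # as the desktop user: allow local TCP clients into your PulseAudio
        pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1

    and then in mpd.conf:

        audio_output {
            type "pulse"
            name "My Pulse Output"
            server "127.0.0.1"
        }

    After restarting mpd, both it and Flash should be mixing through the same PulseAudio instance.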

    Read the article

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they make a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason is that the test results contain not just the result of the test run but also all the binaries that were part of the test run. This often also means that the debug symbols (*.pdb) will be downloaded to your local machine. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why, so they can fix the tests and rerun them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010, where you often have multiple build agents, which of course results in different paths to the trx file. Note: to use this you must have read permission to the build folder on the build agent where the build was executed. Open the build result for the build. Click View Log. Locate the part where MSTest is invoked. Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice! On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults. Double-click on the .trx file and you will notice that it loads much faster compared to opening it from the build log window.
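
    As a concrete, made-up example of that path translation: if the MSTest log shows /resultsfileroot pointing at D:\Builds\Project\TestResults on build agent BUILD01, the .trx can be opened straight from the agent's share, assuming such a share exists and you have read access (the file name below is a placeholder):

        REM open the .trx directly from the build agent's share
        start \\BUILD01\Builds\Project\TestResults\Nightly_20130730.1.trx

    Opening it this way loads only the test results and skips the binary/symbol download.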

    Read the article

  • Help in (re)designing my Swing application

    - by Harihar Das
    I have developed a Swing application that controls the execution of several script-like jobs. I need to display the interim output of the jobs concurrently. I followed MVC while writing the application, and it is working as expected. But of late I have the following requirements in hand: a few of the script jobs need special user privileges to execute, so as to access specialized resources. There seems to be no way in Java to impersonate a different user while running an application [examined in this question], and trying to run the Swing application as a scheduled task in Windows is not helping either. Also, once started, the jobs should keep running even if the user logs off after starting them. I am thinking of separating the execution logic from the UI and running that as a service, and introducing JMS between the two layers so as to store/retrieve the interim output. Note: I need to run this application on Windows. Any ideas on meeting my requirements will be highly appreciated.
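
    Separating the job engine into a Windows service would cover both points: a service runs under whatever account it is configured with (addressing the privilege issue) and survives logoff. As a hedged sketch, assuming the engine is repackaged behind a start/stop class and wrapped with Apache Commons Daemon's procrun (all class, service, and account names below are placeholders):

        REM install the job engine as a Windows service running under a privileged account
        prunsrv.exe //IS//JobEngine --DisplayName="Job Engine" --Startup=auto ^
          --StartMode=jvm --StartClass=com.example.jobs.EngineService --StartMethod=start ^
          --StopMode=jvm --StopClass=com.example.jobs.EngineService --StopMethod=stop ^
          --ServiceUser=.\jobadmin --ServicePassword=secret

    The Swing UI then becomes a plain JMS (or socket) client of that service and no longer needs elevated rights itself.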

    Read the article

  • 10.10 boots to command line login prompt

    - by greggory.hz
    I recently installed Ubuntu 10.10 on a computer that was previously running 10.04 (which worked fine). Now, each time I boot up, it starts at a command-line login prompt. I can log in, and it stays at the command line (as expected). I can then manually start gdm with sudo start gdm and it works fine. I can also enable Compiz (using the proprietary NVIDIA drivers), so I'm reasonably confident it's not a driver problem (at least not in the sense that the drivers just flat out aren't working). Interestingly, if I leave it at the command prompt without logging in, after about 5 or 10 minutes gnome starts up on its own. I'm not sure what is causing this. This is what dmesg | tail gives me after a manual start of gdm:

    [ 15.664166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 270.18 Tue Jan 18 21:46:26 PST 2011
    [ 15.991304] type=1400 audit(1297543976.953:11): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=990 comm="apparmor_parser"
    [ 16.606986] eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1
    [ 18.798506] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
    [ 26.740010] eth0: no IPv6 routers present
    [ 90.444593] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
    [ 189.252208] audit_printk_skb: 21 callbacks suppressed
    [ 189.252213] type=1400 audit(1297544150.218:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1876 comm="apparmor_parser"
    [ 189.252584] type=1400 audit(1297544150.218:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1876 comm="apparmor_parser"
    [ 351.159585] lo: Disabled Privacy Extensions
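
    The delayed automatic start suggests gdm's upstart job is waiting on (or losing a race with) some boot event rather than failing outright. A few hedged checks that may narrow it down (standard Ubuntu paths, but verify on your install):

        cat /etc/X11/default-display-manager   # should print /usr/sbin/gdm
        sudo dpkg-reconfigure gdm              # re-select gdm as the default display manager
        grep -n 'start on' /etc/init/gdm.conf  # see which events gdm waits for at boot
        less /var/log/gdm/:0.log               # any X errors from the automatic start attempt

    If dpkg-reconfigure changes nothing, comparing timestamps in /var/log/boot.log against the 5-10 minute delay should show which event gdm is stuck waiting for.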

    Read the article

  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.

    Read the article

  • Windows 7 disappeared in list of Grub while loading

    - by Riyad A.
    I installed Ubuntu 12.04 alongside Windows 7 two weeks ago. Initially I had no issues with that. A day ago I installed updates on Ubuntu, and after restarting the system I found Windows 7 missing from the Grub list. Before this, the HDD had been partitioned into two volumes, Disk C and a work disk (I don't remember the name). The output of fdisk -l:

    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0xa93031e0
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 408833842 204415897+ 7 HPFS/NTFS/exFAT
    /dev/sda2 488386560 976773119 244193280 7 HPFS/NTFS/exFAT
    /dev/sda3 408834046 488386559 39776257 5 Extended
    Partition 3 does not start on physical sector boundary.
    /dev/sda5 408834048 484421631 37793792 83 Linux
    /dev/sda6 484423680 488386559 1981440 82 Linux swap / Solaris
    Partition table entries are not in disk order

    Disk /dev/mmcblk0: 3965 MB, 3965190144 bytes
    49 heads, 48 sectors/track, 3292 cylinders, total 7744512 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/mmcblk0p1 8192 7744511 3868160 b W95 FAT32

    Running sudo mount /dev/sda ~/1 -o offset=[488386560*512] opens and mounts the WORK disk. I need help with: 1. how to see and mount disk C; 2. how to adjust Grub so that both systems appear in the Grub menu when loading.
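
    Given that fdisk output, /dev/sda1 (the bootable NTFS partition) is almost certainly disk C, so it can be mounted directly, and Grub can be asked to rescan for Windows. A hedged sketch of the usual sequence (package names are the stock Ubuntu 12.04 ones):

        sudo mkdir -p /mnt/winc
        sudo mount -t ntfs-3g /dev/sda1 /mnt/winc   # mount disk C read/write via ntfs-3g
        sudo os-prober                              # should report the Windows 7 loader on /dev/sda1
        sudo update-grub                            # regenerate grub.cfg including the Windows entry

    If os-prober stays silent, the Windows bootloader on sda1 may itself be damaged, which is a job for a Windows recovery disk (bootrec /fixboot) rather than for Grub.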

    Read the article
