Search Results

Search found 7742 results on 310 pages for 'continuous learning'.

  • Where is SQLite on the Nokia N810, and how do I get it?

    - by klausnrooster
    I need SQLite3 with the TCL interface for a simple command-line app I use on all my PCs and want to use on my new (to me) Nokia N810. But "find" does not find any SQLite on the Nokia N810. Several apps supposedly use it, like the maps/gps (poi.db), and others. I ran sudo gainroot before using find, and tried with wildcards (* and .). No luck. I guess it's embedded in the apps that use it? The N810's app manager doesn't offer SQLite. Apt-get has it but then fails with the error "Unable to fetch some archives, ..." The "update" and "--fix-missing" options do not help. I don't really want to get scratchbox and compile SQLite and the TCL extension for it from source. I looked at the README and it seems like it might be straightforward. However, [1] it's not clear which scratchbox download I would need (there are dozens in the Index of /download/files/sbox-releases/hathor/deb/i386), [2] I've never compiled any C sources successfully before, and [3] the whole mess entails a lot of learning curve for what is a very small one-off personal project. http://www.gronmayer.com/ has a few (sqlite (v. 2.8.17-4), libsqlite0 (v. 2.8.17-4), and libsqlite0-dev (v. 2.8.17-4)), but they appear to be part of some huge package, may or may not include the TCL interface, ... and no, I don't want to port my tiny app to Python. Is there some option I've overlooked? Must I get scratchbox and compile for ARM myself? Thanks for your time.

    Read the article

  • I have just created a subnet for a local network, connecting to a standalone server on another network; now I cannot connect to the internet

    - by Seth
    I am just learning some new aspects of servers and networking. We have a network of 5 subnets that all interconnect with each other. In order to get two computers onto the subnet we were setting up, I changed their IPs from the subnet the standalone server is on (where they used to be set up) to the local subnet we are remotely hooking up. Likewise, I changed the gateway to coincide with the new subnet. The only problem is that since doing this, I am unable to establish a connection to the internet. I can ping the server and the corresponding gateway & DNS server, but cannot get connected to the internet. We do have a dumb switch (non-programmable) connected that receives both the internet and private network inputs and distributes them (or should do so) to about 5 other computers. Bottom line, I cannot currently connect to the internet and am wondering what could be causing this. It is likely something very obvious, and pardon me for being vaguer than I probably should be, but I could use some help resolving this! Thanks for any help!

    Read the article

  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview

    I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like:

    error RC1015: cannot open include file 'winres.h'.
    error C1083: Cannot open include file: 'afxwin.h': No such file or directory
    error C1083: Cannot open include file: 'afx.h': No such file or directory

    The Question

    How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)

    Background Details

    We have successfully upgraded an MFC application (.exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; that way, I knew everything I needed would be on the machine (one way or another) and that this would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.

    The Simple MSBuild Wrapper

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Target Name="Build">
        <!-- Doing some versioning stuff here -->
        <MSBuild Projects="target.sln" Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." />
      </Target>
    </Project>

    My Assumptions

    This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. Given the VS2010 MSBuild support for Visual C++ projects (.vcxproj), shouldn't building a solution with MSBuild be pretty close to compiling via Visual Studio? But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.

    Update 1

    This appears to only ever happen in two cases: resource compilation (rc.exe) and precompiled header (stdafx.h) compilation, and only for certain projects. I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they would be willing to share...

    Read the article

  • How to configure Hudson and git plugin with an SSH key

    - by jlpp
    I've got Hudson (continuous integration system) with the git plugin running on a Tomcat Windows Service. msysgit is installed and the msysgit bin dir is in the path. PuTTY/Pageant/plink are installed and msysgit is configured to use them. When I run a job that attempts to clone the git repository I get the following error: $ git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" ERROR: Error cloning remote repo 'origin' : Could not clone git@hostname:project.git ERROR: Cause: Error performing git clone -o origin git@hostname:project.git e:\HUDSON_HOME\jobs\Project Trunk\workspace Trying next repository ERROR: Could not clone from a repository FATAL: Could not clone hudson.plugins.git.GitException: Could not clone Running git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" from the command line works without error. I've confirmed that my issue is not the same as http://stackoverflow.com/questions/1177292/hudson-git-clone-error because git is in the path and I don't get any error about the git executable on Hudson's Configure System page. This leads me to believe that the problem is that the user who owns the Tomcat/Hudson Windows service (Local System) has no SSH key set up to be able to clone the git repository. My question is, how can I set things up so that the git plugin/msysgit know to use a particular SSH key when trying to clone? I don't think Pageant will work because the Tomcat service is running as the "Local System" user, but I may be wrong. I have tried setting Pageant up as a service (using runassvc.exe), passing the appropriate key, and having it run as "Local System". The Tomcat/Hudson service doesn't seem to be able to see the key from the pageant service. Are there any other techniques for setting up a key? Thanks. EDIT: The discussion on http://n4.nabble.com/Hudson-with-git-and-ssh-td375633.html shows that someone else had a similar question. ssh-agent was suggested and this tool does come with msysgit but I'm not sure how to use it in conjunction with the Hudson service. Still, good clue if anyone can fill in the gaps. Thanks to Peter for the comment with the link. Also, the discussion on http://n4.nabble.com/questions-about-git-and-github-plug-ins-td383420.html starts off with the same question. I'm trying to resurrect that thread.

    Read the article

  • iPhone SDK: Audio Queue control

    - by codemercenary
    Hi all, I am new to Audio Queue Services, so I have taken an example from a book called iPhone Cool Projects where it describes how to stream audio. I want to extend this to be able to play a continuous playlist of links to mp3 files, like an internet radio. The problem with the example code is that it does not detect when a stream ends and does not call AudioQueueStop at any point, so I added a counter for the number of buffers added to the queue, and I decrement this counter each time audioQueueOutputCallback is called by the queue. This works fine, except that when the buffer count goes to 0 and I then add a call to AudioQueueFlush(audioQueue) followed by AudioQueueStop(audioQueue, false), I get an error. If I only call AudioQueueReset, it continues to load the buffers again, but plays them out faster than it loads them... getting stuck in a loop and then crashing.

    2010-04-14 13:56:29.745 AudioPlayer[2269:207] init player with URL
    2010-04-14 13:56:29.941 AudioPlayer[2269:207] did recieve data
    2010-04-14 13:56:29.942 AudioPlayer[2269:207] audio request didReceiveData
    2010-04-14 13:56:29.944 AudioPlayer[2269:207] >>> start audio queue
    2010-04-14 13:56:29.960 AudioPlayer[2269:207] packetCallback count 2
    2010-04-14 13:56:29.961 AudioPlayer[2269:207] add buffer: 1
    2010-04-14 13:56:29.962 AudioPlayer[2269:207] did recieve data
    2010-04-14 13:56:29.963 AudioPlayer[2269:207] audio request didReceiveData
    2010-04-14 13:56:29.963 AudioPlayer[2269:207] packetCallback count 1
    2010-04-14 13:56:29.964 AudioPlayer[2269:207] add buffer: 2
    2010-04-14 13:56:29.965 AudioPlayer[2269:207] packetCallback count 13
    2010-04-14 13:56:29.967 AudioPlayer[2269:207] add buffer: 3
    2010-04-14 13:56:29.968 AudioPlayer[2269:207] done with buffer: 3
    2010-04-14 13:56:29.969 AudioPlayer[2269:207] done with buffer: 2
    2010-04-14 13:56:29.974 AudioPlayer[2269:207] done with buffer: 1

    This loop continues some 20-30 times and then it crashes. The first time it plays an audio file it queues up the buffers and then plays sound, but doesn't call back to delete them until some 100 or more have been played. Can anyone explain this behavior? I read that there was a limit of one audio queue for MP3 playback on the iPhone. Is that still true? If not, then I suppose I should use another audio queue for the next mp3 stream. I've had a look through the Apple docs but they don't explain this in any particular detail. A better insight into this would be great. TIA.
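
    One way to reason about when it is safe to flush and stop is to track two separate facts - "how many buffers are still in flight" and "has the source stream ended" - and only stop when both hold, rather than whenever the counter happens to hit zero mid-stream. A minimal Python sketch of that bookkeeping (illustrative names only, not the Audio Queue API):

        # Sketch, assuming the surrounding code reports when buffers are
        # enqueued/played and when the network stream has no more data.
        class PlaybackBookkeeper:
            def __init__(self):
                self.buffers_in_flight = 0
                self.stream_ended = False
                self.stopped = False

            def buffer_enqueued(self):
                self.buffers_in_flight += 1

            def stream_finished(self):
                # the download/stream reports no more data
                self.stream_ended = True
                self._maybe_stop()

            def buffer_played(self):
                # analogous to the output callback firing for a finished buffer
                self.buffers_in_flight -= 1
                self._maybe_stop()

            def _maybe_stop(self):
                # flush/stop only once, and only when nothing is left to play
                if self.stream_ended and self.buffers_in_flight == 0 and not self.stopped:
                    self.stopped = True
                    print("flush and stop the queue here")

        if __name__ == "__main__":
            bk = PlaybackBookkeeper()
            for _ in range(3):
                bk.buffer_enqueued()
            bk.stream_finished()    # stream ends while buffers are still queued
            for _ in range(3):
                bk.buffer_played()  # stop fires only after the last buffer drains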

    Read the article

  • CCNet web dashboard not showing anything when MSBuild fails

    - by cfdev9
    I have a simple project in ccnet using svn & msbuild only. There is a 30 second trigger for svn and the msbuild file compiles a web application then copies it to a numbered build folder. When an error occurs in the msbuild task I get a failed build. When I view a failed build in the web dashboard I can see the 'Modifications since last build' section in the dashboard, but nothing else. I have to click on the build log and read through all of the xml in the error log to see what the error was. Why won't the dashboard show the errors from the build log? I haven't changed anything in the dashboard.config since installing ccnet. Dashboard Version: 1.5.7256.1

    <project name="SimpleWebapp1">
      <artifactDirectory>C:\Program Files\CruiseControl.NET\server\SimpleWebapp1\Artifacts\</artifactDirectory>
      <triggers>
        <intervalTrigger name="continuous" seconds="30" buildCondition="IfModificationExists" initialSeconds="5" />
      </triggers>
      <sourcecontrol type="svn">
        <executable>C:\Program Files\CollabNet\Subversion Client\svn.exe</executable>
        <trunkUrl>https://server:8443/svn/SimpleWebapp1/trunk</trunkUrl>
        <workingDirectory>D:\CCNetSandbox\SimpleWebapp1</workingDirectory>
        <username>username</username>
        <password>password</password>
      </sourcecontrol>
      <tasks>
        <msbuild>
          <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
          <workingDirectory>D:\CCNetSandbox\SimpleWebapp1</workingDirectory>
          <projectFile>SimpleWebapp1.build</projectFile>
          <buildArgs>/p:Configuration=Debug /p:Platform="Any CPU"</buildArgs>
          <targets>CompileLatest</targets>
          <timeout>900</timeout>
          <logger>ThoughtWorks.CruiseControl.MsBuild.XMLLogger, C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
        </msbuild>
      </tasks>
      <publishers>
        <xmllogger />
        <buildpublisher>
          <publishDir>C:\Program Files\CruiseControl.NET\server\SimpleWebapp1\Artifacts\</publishDir>
          <useLabelSubDirectory>true</useLabelSubDirectory>
        </buildpublisher>
      </publishers>
    </project>

    Read the article

  • Jquery Cycle Plugin Question - turning off relative links to photos so it goes to a URL

    - by alpdog14
    I am using the jQuery Cycle plugin, and it has a side panel that highlights as the photos change. Currently, when I click on the text, the associated photo pulls up, and then I have to click on the photo to go to the URL; I would like the text itself to link to the URL. I have looked at the fn.cycle.defaults but I'm not sure what to change, and I tried a few things but nothing works. If anyone can help me figure this out it would be most helpful. Here are the fn.cycle.defaults:

    fx: 'fade', // one of: fade, shuffle, zoom, scrollLeft, etc
    timeout: 4000, // milliseconds between slide transitions (0 to disable auto advance)
    continuous: 0, // true to start next transition immediately after current one completes
    speed: 1000, // speed of the transition (any valid fx speed value)
    speedIn: null, // speed of the 'in' transition
    speedOut: null, // speed of the 'out' transition
    next: null, // id of element to use as click trigger for next slide
    prev: null, // id of element to use as click trigger for previous slide
    prevNextClick: null, // callback fn for prev/next clicks: function(isNext, zeroBasedSlideIndex, slideElement)
    pager: null, // id of element to use as pager container
    pagerClick: null, // callback fn for pager clicks: function(zeroBasedSlideIndex, slideElement)
    pagerEvent: null, // event which drives the pager navigation
    pagerAnchorBuilder: null, // callback fn for building anchor links
    before: null, // transition callback (scope set to element to be shown)
    after: null, // transition callback (scope set to element that was shown)
    end: null, // callback invoked when the slideshow terminates (use with autostop or nowrap options)
    easing: null, // easing method for both in and out transitions
    easeIn: null, // easing for "in" transition
    easeOut: null, // easing for "out" transition
    shuffle: null, // coords for shuffle animation, ex: { top:15, left: 200 }
    animIn: null, // properties that define how the slide animates in
    animOut: null, // properties that define how the slide animates out
    cssBefore: null, // properties that define the initial state of the slide before transitioning in
    cssAfter: null, // properties that defined the state of the slide after transitioning out
    fxFn: null, // function used to control the transition
    height: 'auto', // container height
    startingSlide: 0, // zero-based index of the first slide to be displayed
    sync: 1, // true if in/out transitions should occur simultaneously
    random: 0, // true for random, false for sequence (not applicable to shuffle fx)
    fit: 0, // force slides to fit container
    pause: true, // true to enable "pause on hover"
    autostop: 0, // true to end slideshow after X transitions (where X == slide count)
    autostopCount: 0, // number of transitions (optionally used with autostop to define X)
    delay: 0, // additional delay (in ms) for first transition (hint: can be negative)
    slideExpr: null, // expression for selecting slides (if something other than all children is required)
    cleartype: 0, // true if clearType corrections should be applied (for IE)
    nowrap: 0 // true to prevent slideshow from wrapping
    };

    I have tried changing the pageClick and pagerEvent options but nothing seems to be working. Please help!!!

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when rebuilding the index?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test currently starts out empty and grows to half a million rows. It has a fairly large row (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information: Table and index

    CREATE TABLE icl_contacts (
      id bigint NOT NULL,
      campaignfqname character varying(255) NOT NULL,
      currentstate character(16) NOT NULL,
      xmlscheduledtime character(23) NOT NULL,
      ... 25 or so other fields. Most of them fixed or varying character fiel ...
      CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
    ) WITH (OIDS=FALSE);
    ALTER TABLE icl_contacts OWNER TO postgres;
    CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

    Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
    - Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
      Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is on understanding how PostgreSQL manages the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
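
    For reference, the 'rebuild during a quiet window' option discussed above can be scripted so it runs on a schedule; a rough Python sketch using psycopg2, where the connection string is a placeholder and the statements are the standard VACUUM ANALYZE / REINDEX commands (both take locks, so this belongs in a maintenance window, not at peak load):

        # Rough sketch, assuming psycopg2 is available and the DSN below is
        # replaced with real connection details.
        import psycopg2

        def rebuild_index(dsn="dbname=mydb user=postgres"):  # placeholder DSN
            conn = psycopg2.connect(dsn)
            conn.autocommit = True   # VACUUM cannot run inside a transaction block
            try:
                with conn.cursor() as cur:
                    cur.execute("VACUUM ANALYZE icl_contacts;")
                    cur.execute("REINDEX INDEX icl_contacts_idx;")
            finally:
                conn.close()

        if __name__ == "__main__":
            rebuild_index()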

    Read the article

  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process and what you did to improve it.

    My example: at a previous employer, all developers made database changes on one common development database. Then, when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are:

    - ALL changes in the Dev database are included, some of which may still be 'works in progress'.
    - Sometimes developers made conflicting changes (that were not noticed until the release was in production).
    - It was a time-consuming and manual process to create and validate the script (by validate I mean try to weed out issues like problems 1 and 2).
    - When there were problems with the script (eg the order in which things were run, such as creating a record which relies on a foreign key record which is in the script but not yet run) it took time to 'tweak' it so it ran smoothly.
    - It's not an ideal scenario for Continuous Integration.

    So the solution was:

    - Enforce a policy that all changes to the database must be scripted. A naming convention was important for ensuring the correct running order of the scripts.
    - Create/use a tool to run the scripts at release time (a sketch of such a runner follows below).
    - Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes').

    The next release after we started this process was much faster with fewer problems; indeed, the only problems found were due to people 'breaking the rules', eg not creating a script. Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth. We applied a few other changes (like introducing CI), but this was the most significant: overall we reduced release time from around 3 hours down to a max of 10-15 minutes.
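
    A minimal sketch of the 'tool to run the scripts at release time' mentioned above, in Python: it applies *.sql files in naming-convention order and records which ones have already run so each script is applied exactly once. The directory layout, log file name, and the execute() stub are assumptions for illustration - wire execute() to whatever database driver is actually in use.

        import os

        APPLIED_LOG = "applied_scripts.txt"   # assumed bookkeeping file

        def already_applied():
            if not os.path.exists(APPLIED_LOG):
                return set()
            with open(APPLIED_LOG) as f:
                return {line.strip() for line in f if line.strip()}

        def execute(sql_text):
            # stub: replace with a real call to your database driver
            print("executing %d characters of SQL" % len(sql_text))

        def run_release_scripts(script_dir="db_scripts"):   # assumed layout
            if not os.path.isdir(script_dir):
                return
            done = already_applied()
            # the naming convention (e.g. 001_create_table.sql) gives the run order
            for name in sorted(os.listdir(script_dir)):
                if not name.endswith(".sql") or name in done:
                    continue
                with open(os.path.join(script_dir, name)) as f:
                    execute(f.read())
                with open(APPLIED_LOG, "a") as log:
                    log.write(name + "\n")

        if __name__ == "__main__":
            run_release_scripts()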

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update

    Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar:

    public static class ObservableExtensions
    {
        private class ContinuousAverager
        {
            private double _mean;
            private long _count;

            public ContinuousAverager()
            {
                _mean = 0.0;
                _count = 0L;
            }

            // undecided whether this method needs to be made thread-safe or not
            // seems that ought to be the responsibility of the IObservable (?)
            public double Add(double value)
            {
                double delta = value - _mean;
                _mean += (delta / (double)(++_count));
                return _mean;
            }
        }

        public static IObservable<double> ContinuousAverage(this IObservable<double> source)
        {
            var averager = new ContinuousAverager();
            return source.Select(x => averager.Add(x));
        }
    }

    I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that?

    Original Question

    I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this:

    var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick");
    var bids = ticks
        .Where(e => e.EventArgs.Quote.HasBid)
        .Select(e => e.EventArgs.Quote.Bid);
    var bidsSubscription = bids.Subscribe(
        b => Console.WriteLine("Bid: {0}", b)
    );
    var avgOfBids = bids.Average();
    var avgOfBidsSubscription = avgOfBids.Subscribe(
        b => Console.WriteLine("Avg Bid: {0}", b)
    );

    I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this:

    Bid    Avg Bid
    1      1
    2      1.5
    1      1.33
    2      1.5

    It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
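
    The ContinuousAverager above is just an incremental (running) mean - mean += (value - mean) / count - emitting the current average after every element, which is the behavior the post expects from Average. A small Python sketch of the identical arithmetic, for checking the update rule in isolation:

        # Running mean with the same update rule as ContinuousAverager.Add:
        # the average is re-emitted after every element.
        def continuous_average(values):
            mean, count = 0.0, 0
            for value in values:
                count += 1
                mean += (value - mean) / count
                yield mean

        if __name__ == "__main__":
            bids = [1, 2, 1, 2]
            # matches the Bid / Avg Bid table above: 1, 1.5, 1.33..., 1.5
            print(list(continuous_average(bids)))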

    Read the article

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company that has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance test arena. We want to take our well-formed user stories and use these to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story which we can then automate.

    To date, we've tried Fit, FitNesse and Selenium. Each has its advantages, but we've also had real issues with them. With Fit and FitNesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business haven't fully bought into these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow, and it relies on real-time data and resources.

    One approach we are now considering is using the JUnit framework to provide similar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page.

    This seems to me to have the following advantages:

    - Simplicity (no additional frameworks required)
    - Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests)
    - Full skillset already present in the team (it's just a JUnit test after all)

    And the downsides being:

    - Less customer involvement (though they are heavily involved in writing the user stories in the first place, from which the acceptance tests will be written)
    - Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class versus a freetext specification a la Fit or FitNesse

    So, my question is really: have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.

    Read the article

  • MySQL - help me optimize this query

    - by sandeepan-nath
    About the system: -The system has a total of 8 tables - Users - Tutor_Details (Tutors are a type of User,Tutor_Details table is linked to Users) - learning_packs, (stores packs created by tutors) - learning_packs_tag_relations, (holds tag relations meant for search) - tutors_tag_relations and tags and orders (containing purchase details of tutor's packs), order_details linked to orders and tutor_details. For a more clear idea about the tables involved please check the The tables section in the end. -A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is a simpler representation (not the actual) of the more complex query which I am trying to optimize:- I have used statements like explanation of parts in the query select SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) as key_1_total_matches, SUM(DISTINCT( t.tag LIKE "%democracy%" )) as key_2_total_matches, td., u., count(distinct(od.id_od)), if (lp.id_lp > 0) then some conditional logic on lp fields else 0 as tutor_popularity from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN `some other tables on lp.id_lp - let's call learning pack tables set (including Learning_Packs table)` LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) where some condition on Users table's fields AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp 0)) THEN `some conditions on learning pack tables set` ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc 0)) THEN `some conditions on webclasses tables set` ELSE 1 END AND CASE WHEN (od.id_od0) THEN od.id_author = td.id_tutor and some conditions on Orders table's fields ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%") group by td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20 ===================================================================== What does the above query do? Does AND logic search on the search keywords (2 in this example - "Democracy" and "Dictatorship"). Returns only those tutors for which both the keywords are present in the union of the two sets - tutors details and details of all the packs created by a tutor. To make things clear - Suppose a Tutor name "Sandeepan Nath" has created a pack "My first pack", then:- Searching "Sandeepan Nath" returns Sandeepan Nath. Searching "Sandeepan first" returns Sandeepan Nath. Searching "Sandeepan second" does not return Sandeepan Nath. ====================================================================================== The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by the query on heavily loaded databases is like 25 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. 
It is possible that some of the delay is being caused because all the possible fields have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible, displaying the same results ========================================================================================== How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. ==================================================================== The tables Most of the following tables contain many other fields which I have omitted here. CREATE TABLE IF NOT EXISTS users ( id_user int(10) unsigned NOT NULL AUTO_INCREMENT, name varchar(100) NOT NULL DEFAULT '', surname varchar(155) NOT NULL DEFAULT '', PRIMARY KEY (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636 ; CREATE TABLE IF NOT EXISTS tutor_details ( id_tutor int(10) NOT NULL AUTO_INCREMENT, id_user int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_tutor), KEY Users_FKIndex1 (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51 ; CREATE TABLE IF NOT EXISTS orders ( id_order int(10) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (id_order), KEY Orders_FKIndex1 (id_user), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275 ; ALTER TABLE orders ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS order_details ( id_od int(10) unsigned NOT NULL AUTO_INCREMENT, id_order int(10) unsigned NOT NULL DEFAULT '0', id_author int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_od), KEY Order_Details_FKIndex1 (id_order) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284 ; ALTER TABLE order_details ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs ( id_lp int(10) unsigned NOT NULL AUTO_INCREMENT, id_author int(10) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (id_lp), KEY Learning_Packs_FKIndex2 (id_author), KEY id_lp (id_lp) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23 ; CREATE TABLE IF NOT EXISTS tags ( id_tag int(10) unsigned NOT NULL AUTO_INCREMENT, tag varchar(255) DEFAULT NULL, PRIMARY KEY (id_tag), UNIQUE KEY tag (tag), KEY id_tag (id_tag), KEY tag_2 (tag), KEY tag_3 (tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419 ; CREATE TABLE IF NOT EXISTS tutors_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, KEY Tutors_Tag_Relations (id_tag), KEY id_tutor (id_tutor), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, id_lp int(10) unsigned DEFAULT NULL, KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag), KEY id_lp (id_lp), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT 
Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION; =================================================================================== Following is the exact query (this includes classes also - tutors can create classes and search terms are matched with classes created by tutors):- select count(distinct(od.id_od)) as tutor_popularity, CASE WHEN (IF((wc.id_wc 0), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))), 0)) THEN 1 ELSE 0 END as 'classes_published', CASE WHEN (IF((lp.id_lp 0), (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))),0)) THEN 1 ELSE 0 END as 'packs_published', td . * , u . * from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp) LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) OR (t.id_tag = wtagrels.id_tag) where (u.country='IE' or u.country IN ('INT')) AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp 0)) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc 0)) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN (od.id_od0) THEN od.id_author = td.id_tutor and o.order_status = 'paid' and CASE WHEN (od.id_wc 0) THEN od.can_attend_class=1 ELSE 1 END ELSE 1 END AND 1 group by td.id_tutor order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20 Please note - The provided database structure does not show all the fields and tables as in this query

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
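
    To make the alternative described above - an auto-generated source file containing a static lookup table, searched by binary search - concrete, here is a minimal Python sketch; the keys and values are invented purely for illustration:

        # Sketch of the "static table in source code" approach: a generated
        # module holds a sorted literal table, and lookups use binary search.
        import bisect

        # kept sorted by key so bisect can be used; regenerated every few
        # years alongside the other source changes the post mentions
        LOOKUP_TABLE = (
            ("AL", "Alabama"),
            ("CA", "California"),
            ("NY", "New York"),
            ("TX", "Texas"),
        )
        _KEYS = [k for k, _ in LOOKUP_TABLE]

        def lookup(key):
            i = bisect.bisect_left(_KEYS, key)
            if i < len(_KEYS) and _KEYS[i] == key:
                return LOOKUP_TABLE[i][1]
            raise KeyError(key)

        if __name__ == "__main__":
            print(lookup("NY"))   # New York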

    Read the article

  • Issue while trying to give a floating effect to a footer bar in IE

    - by Shailesh
    Hi, I'm trying to put a footer bar at the bottom of the browser no matter what the length of the content is. The footer bar should always be visible to the user and should be on the top layer. Following is my code: <html> <head> <style type="text/css"> #wrapper { width: 910px; margin: 0 auto; padding: 0px 20px 50px 20px; text-align: left; } #footer-wrapper { -moz-background-clip:border; -moz-background-inline-policy:continuous; -moz-background-origin:padding; bottom:0; clear:both; font-size:11px !important; left:0; position:fixed; white-space:nowrap; width:100%; z-index:8000; } </style> <script> var counter = 0; function addContent(ctr) { document.getElementById(ctr).innerHTML=document.getElementById (ctr).innerHTML+" dynaContent"+counter; counter++; } </script> </head> <body> <div> <div><input type="button" onclick="addContent('wrapper')" value="Add dynaContent" /></div> <div id="wrapper" style="color:#FFFFFF; background-color: #111111;"> STATIC TEXT - HEADER-WRAPPER </div> <div style="color:#FFFFFF;background-color: #555555;">STATIC TEXT - FOOTER-WRAPPER</div> </div> </body> </html> It's working fine in Mozilla Firefox and giving the intended results, but, in IE, the footer bar always sticks just under the header. Please help. Thanks in advance, Shailesh.

    Read the article

  • MATLAB image corner coordinates & referencing cell arrays

    - by James
    Hi, I am having some problems comparing the elements in different cell arrays. The context of this problem is that I am using the bwboundaries function in MATLAB to trace the outline of an image. The image is of a structural cross section and I am trying to find if there is continuity throughout the section (i.e. there is only one outline produced by the bwboundaries command). Having done this and found where there is more than one section traced (i.e. it is not continuous), I have used the cornermetric command to find the corners of each section. The code I have is:

    %% Define the structural section as a binary matrix (Image is an I-section with the web broken)
    bw(20:40,50:150) = 1;
    bw(160:180,50:150) = 1;
    bw(20:60,95:105) = 1;
    bw(140:180,95:105) = 1;
    Trace = bw;
    [B] = bwboundaries(Trace,'noholes'); % Traces the outer boundary of each section
    L = length(B); % Finds number of boundaries
    if L > 1
        disp('Multiple boundaries') % States whether more than one boundary found
    end

    %% Obtain perimeter coordinates
    for k=1:length(B) % For all the boundaries
        perim = B{k}; % Obtains perimeter coordinates (as a 2D matrix) from the cell array
    end

    %% Find the corner positions
    C = cornermetric(bw);
    Areacorners = find(C == max(max(C))) % Finds the corner coordinates of each boundary
    [rowindexcorners,colindexcorners] = ind2sub(size(Newgeometry),Areacorners) % Convert corner coordinate indexes into subscripts, to give x & y coordinates (i.e. the same format as B gives)

    %% Put these corner coordinates into a cell array
    Cornerscellarray = cell(length(rowindexcorners),1); % Initialises cell array of zeros
    for i =1:numel(rowindexcorners)
        Cornerscellarray(i) = {[rowindexcorners(i) colindexcorners(i)]}; % Assigns the corner indices into the cell array
        % This is done so the cell arrays can be compared
    end

    for k=1:length(B) % For all the boundaries found
        perim = B{k}; % Obtains coordinates for each perimeter
        Z = perim; % Initialise the matrix containing the perimeter corners
        Sectioncellmatrix = cell(length(rowindexcorners),1);
        for i =1:length(perim)
            Sectioncellmatrix(i) = {[perim(i,1) perim(i,2)]};
        end
        for i = 1:length(perim)
            if Sectioncellmatrix(i) ~= Cornerscellarray
                Sectioncellmatrix(i) = []; % Gets rid of the elements that are not corners, but keeps them associated with the relevant section
            end
        end
    end

    This creates an error in the last for loop. Is there a way I can check whether each cell of the array (containing an x and y coordinate) is equal to any pair of coordinates in cornercellarray? I know it is possible with matrices to compare whether a certain element matches any of the elements in another matrix. I want to be able to do the same here, but for the pair of coordinates within the cell array. The reason I don't just use the cornercellarray cell array itself is because this lists all the corner coordinates and does not associate them with a specific traced boundary.
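
    The comparison being attempted in the last loop - "is this perimeter point one of the detected corner coordinates?" - is a pair/row membership test rather than an element-wise ~= on cell arrays; in MATLAB that is what ismember(A, B, 'rows') does on N-by-2 coordinate matrices. Sketched generically in Python with a set of coordinate tuples (the coordinates are made up):

        # Generic sketch of the pair-membership test: keep corner coordinates
        # in a set of (row, col) tuples and test each perimeter point against it.
        corners = {(20, 50), (20, 150), (40, 50), (40, 150)}
        perimeter = [(20, 50), (20, 51), (20, 52), (40, 150)]

        corner_points = [p for p in perimeter if p in corners]
        print(corner_points)   # [(20, 50), (40, 150)]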

    Read the article

  • Creating multiple heads in remote repository

    - by Jab
    We are looking to move our team (~10 developers) from SVN to Mercurial. We are trying to figure out how to manage our workflow. In particular, we are trying to see if creating remote heads is the right solution. We currently have a very large repository with multiple, related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independent of other portions of the code-base, so each team is working on concurrent large features. The way we currently handle this in SVN is branches: Team1 has a branch for Feature1, same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suit when their project is complete, merging of course.

    So my initial thought is to use Named Branches for these situations. Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question: should the team PUSH that branch, in its current/half-finished state, to the repository? This will create a second head in the core repo. My initial reaction was "NO!" as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages...

    First, the teams want to set up Continuous Integration to build this branch during their development cycle (months long). This will only work if the CI can pull this branch from the repo. This is something we do now with SVN: copy a CI build and change the branch. Easy. Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure; the chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach. And lastly, it's just for ease of use: the developers can easily commit and push on their branch at any time without consequence (as they do today, in their SVN branches).

    Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy. For bug fixes we like the general Mercurial workflow of anonymous branches that only consist of 1-2 commits; the simplicity is great for those cases. By the way, I've read this great article, which seems to favor Named Branches.

    Read the article

  • Output from OouraFFT correct sometimes but completely false other times. Why?

    - by Yan
    Hi, I am using Ooura FFT to compute the FFT of the accelerometer data in windows of 1024 samples. The code works fine, but then for some reason it produces very strange outputs, i.e. a continuous spectrum with amplitudes of the order of 10^200. Here is the code: OouraFFT *myFFT=[[OouraFFT alloc] initForSignalsOfLength:1024 NumWindows:10]; // had to allocate it UIAcceleration *tempAccel = nil; double *input=(double *)malloc(1024 * sizeof(double)); double *frequency=(double *)malloc(1024*sizeof(double)); if (input) { //NSLog(@"%d",[array count]); for (int u=0; u<[array count]; u++) { tempAccel = (UIAcceleration *)[array objectAtIndex:u]; input[u]=tempAccel.z; //NSLog(@"%g",input[u]); } } myFFT.inputData=input; // specifies input data to myFFT [myFFT calculateWelchPeriodogramWithNewSignalSegment]; // calculates FFT for (int i=0;i<myFFT.dataLength;i++) // loop to copy output of myFFT, length of spectrumData is half of input data, so copy twice { if (i<myFFT.numFrequencies) { frequency[i]=myFFT.spectrumData[i]; // } else { frequency[i]=myFFT.spectrumData[myFFT.dataLength-i]; // copy twice } } for (int i=0;i<[array count];i++) { TransformedAcceleration *NewAcceleration=[[TransformedAcceleration alloc]init]; tempAccel=(UIAcceleration*)[array objectAtIndex:i]; NewAcceleration.timestamp=tempAccel.timestamp; NewAcceleration.x=tempAccel.x; NewAcceleration.y=tempAccel.z; NewAcceleration.z=frequency[i]; [newcurrentarray addObject:NewAcceleration]; // this does not work //[self replaceAcceleration:NewAcceleration]; //[NewAcceleration release]; [NewAcceleration release]; } TransformedAcceleration *a=nil;//[[TransformedAcceleration alloc]init]; // object containing fft of x,y,z accelerations for(int i=0; i<[newcurrentarray count]; i++) { a=(TransformedAcceleration *)[newcurrentarray objectAtIndex:i]; //NSLog(@"%d,%@",i,[a printAcceleration]); fprintf(fp,[[a printAcceleration] UTF8String]); //this is going wrong somehow } fclose(fp); [array release]; [myFFT release]; //[array removeAllObjects]; [newcurrentarray release]; free(input); free(frequency);
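    One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis): if [array count] is ever smaller than 1024, the tail of input is never written, and the FFT then transforms whatever malloc happened to hand back, which can easily produce amplitudes on the 10^200 scale. Zero-filling the buffers makes that case harmless and easy to spot:

        // zero-filled allocations instead of malloc, so any unwritten samples are 0.0
        double *input = (double *)calloc(1024, sizeof(double));
        double *frequency = (double *)calloc(1024, sizeof(double));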

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require contiguous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs. MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
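    A minimal sketch of the hybrid idea described above (the class name, the 500 MB threshold, and the temp-file choice are assumptions, not a tested implementation; Stream.CopyTo needs .NET 4, so on older frameworks the copy loop has to be written by hand):

        using System.IO;

        class SpillableBuffer
        {
            const long Threshold = 500L * 1024 * 1024;   // assumed cut-over point
            Stream store = new MemoryStream();

            public void Write(byte[] buffer, int count)
            {
                MemoryStream ms = store as MemoryStream;
                if (ms != null && ms.Length + count > Threshold)
                {
                    // Spill everything buffered so far to a temp file, then keep appending there.
                    FileStream fs = new FileStream(Path.GetTempFileName(), FileMode.Create);
                    ms.Position = 0;
                    ms.CopyTo(fs);
                    ms.Dispose();
                    store = fs;
                }
                store.Write(buffer, 0, count);
            }

            public Stream OpenForProcessing()
            {
                store.Position = 0;   // rewind so the processing stage can read it back in chunks
                return store;
            }
        }

    As for the caching question: FileStream.Write normally lands in the OS file cache first, so with plenty of free RAM the writes are not necessarily synchronous disk hits; forcing data to the platter is what FileStream.Flush(true) or FileOptions.WriteThrough are for.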

    Read the article

  • Is it possible to use CSS to align these divs/spans in a table-like manner?

    - by Justin L.
    I have <div class='line'> <div class='chord_line'> <span class='chord_block'></span> <span class='chord_block'>E</span> <span class='chord_block'>B</span> <span class='chord_block'>C#m</span> <span class='chord_block'>A</span> </div> <div class='lyric_line'> <span class='lyric_block'></span> <span class='lyric_block'>Just a</span> <span class='lyric_block'>small-town girl</span> <span class='lyric_block'>living in a</span> <span class='lyric_block'>lonely world</span> </div> </div> (Excuse me for not being too familiar with proper css conventions for when to use div/spans) I want to be able to display them so that each chord_block span and lyric_block span is aligned vertically, as if they were left-aligned and on the same row of a table. For example: E B C#m A Just a small-town girl living in a lonely world (There will often be cases where an empty chord block is matched up to non-empty lyric block, and vice-versa.) I'm completely new to using CSS to align things, and have had no real understanding/experience of CSS aside from changing background colors and link styles. Is this possible in CSS? If not, how could the div/class nesting structure be revised to make this possible? I could change the spans to divs if necessary. Some things I cannot use: I can't change the structure to group things by a chord_and_lyric_block div (and have their width stretch to the length of the lyric, and stack them horizontally), because I couldn't really copy/select the lyrical lines continuously in their entirety, which is extremely critical. I'm trying to avoid a table-like solution, because this data is not tabular at all. The chord line and the lyric line are meant to be read as one continuous line, not a set of cells. Also, apart from the design philosophy reasons, I think it might have the same problems as the previous thing bullet point. If this is possible, what div/span attributes should I be using? Can you provide sample css? If this is not possible, can it be done with javascript?
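    One approach worth trying (a sketch, not a tested solution; display: table-cell is not supported by IE7 and earlier) is CSS table display. The markup stays exactly as posted: the outer div becomes the table, the two inner divs become rows, and the spans become cells, so each chord lines up over its lyric the way table columns would, without an actual <table> element. Whether text selection across the lyric line still feels continuous is worth verifying:

        .line        { display: table; }
        .chord_line,
        .lyric_line  { display: table-row; }
        .chord_block,
        .lyric_block { display: table-cell; padding-right: 0.5em; }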

    Read the article

  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
    I have an events table that has user_id, date ('2013-10-01'), time ('04:15:00'), and status_id; What I am looking to find is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration but I need two additional components: 1) Take date into consideration, so 10/1/2013 at 11:00 PM - 10/2/2013 at 3:00AM. Feel free to just put in a fake date range (like '2013-10-01' to '2013-10-31') 2) Limit output to only include when there are 4+ consecutive times (each event is 15 minutes and I want it to display minimum blocks of an hour, but would also like to be able to switch this restriction to 1.5 hours or some other duration if possible). SUMMARY - Looking for a query that provides the start and end times for a set of events that have the same user_id, status_id, and are in a continuous series based on date and time, for which I can restrict results based on date range and minimum series duration. So the output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration CREATE TABLE `events` ( `event_id` int(11) NOT NULL auto_increment COMMENT 'ID', `user_id` int(11) NOT NULL, `date` date NOT NULL, `time` time NOT NULL, `status_id` int(11) default NULL, PRIMARY KEY (`event_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568 ; INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2); INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2); INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2); INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2); INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2); INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1); INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1); INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1); INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2); INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2); INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1); Desired output row |user_id | date_start | time_start | date_end | time_end | status_id | duration 1 |101 |'2013-08-14'| '23:00:00' |'2013-08-15'|'00:15:00'| 2 | 5 2 |101 |'2013-08-15'| '00:00:15' |'2013-08-15'|'00:30:00'| 1 | 1 3 |500 |'2013-08-14'| '00:23:45' |'2013-08-15'|'00:15:00'| 1 | 2 4 |500 |'2013-08-15'| '00:00:15' |'2013-08-15'|'00:45:00'| 2 | 2 5 |500 |'2013-08-15'| '00:00:45' |'2013-08-15'|'01:00:00'| 2 | 1 *except that rows 2 and 5 wouldn't appear if duration had to be greater than 30 minutes Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!!
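    A sketch of one way to attack this with the classic "gaps and islands" trick and session variables (an untested outline: it relies on MySQL's user-variable evaluation order, which is not formally guaranteed, and on every event being exactly 15 minutes; on MySQL 8+ window functions would be the cleaner tool):

        SELECT user_id, status_id,
               DATE(MIN(ts)) AS date_start, TIME(MIN(ts)) AS time_start,
               DATE(MAX(ts) + INTERVAL 15 MINUTE) AS date_end,
               TIME(MAX(ts) + INTERVAL 15 MINUTE) AS time_end,
               COUNT(*) AS duration                       -- number of 15-minute slots
        FROM (
          SELECT e.user_id, e.status_id,
                 TIMESTAMP(e.`date`, e.`time`) AS ts,
                 @grp := IF(@prev_user = e.user_id
                            AND @prev_status <=> e.status_id
                            AND TIMESTAMP(e.`date`, e.`time`) = @prev_ts + INTERVAL 15 MINUTE,
                            @grp, @grp + 1) AS grp,
                 @prev_user := e.user_id,
                 @prev_status := e.status_id,
                 @prev_ts := TIMESTAMP(e.`date`, e.`time`)
          FROM events e,
               (SELECT @grp := 0, @prev_user := NULL, @prev_status := NULL, @prev_ts := NULL) init
          WHERE e.`date` BETWEEN '2013-08-01' AND '2013-08-31'   -- the date-range restriction
          ORDER BY e.user_id, e.`date`, e.`time`
        ) runs
        GROUP BY user_id, status_id, grp
        HAVING COUNT(*) >= 4;                              -- 4 slots = 1 hour minimum (6 for 1.5 hours)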

    Read the article

  • Finding open contiguous blocks of time for every day of a month, fast

    - by Chris
    I am working on a booking availability system for a group of several venues, and am having a hard time generating the availability of time blocks for days in a given month. This is happening server-side in PHP, but the concept itself is language agnostic -- I could be doing this in JS or anything else. Given a venue_id, month, and year (6/2012 for example), I have a list of all events occurring in that range at that venue, represented as unix timestamps start and end. This data comes from the database. I need to establish what, if any, contiguous blocks of time of a minimum length (different per venue) exist on each day. For example, on 6/1 I have an event between 2:00pm and 7:00pm. The minimum time is 5 hours, so there's a block open there from 9am - 2pm and another between 7pm and 12am. This would continue for the 2nd, 3rd, etc... every day of June. Some (most) of the days have nothing happening at all, some have 1 - 3 events. The solution I came up with works, but it also takes waaaay too long to generate the data. Basically, I loop every day of the month and create an array of timestamps for each 15 minutes of that day. Then, I loop the time spans of events from that day by 15 minutes, marking any "taken" timeslot as false. What remains is an array that contains the timestamps of free time vs. taken time: //one day's array after processing through loops (not real timestamps) array( 12345678=>12345678, // <--- avail 12345878=>12345878, 12346078=>12346078, 12346278=>false, // <--- not avail 12346478=>false, 12346678=>false, 12346878=>false, 12347078=>12347078, // <--- avail 12347278=>12347278 ) Now I would need to loop THIS array to find continuous time blocks, then check to see if they are long enough (each venue has a minimum), and if so then establish the descriptive text for their start and end (i.e. 9am - 2pm). WHEW! By the time all this looping is done, the user has grown bored and wandered off to Youtube to watch videos of puppies; it takes ages to examine 30 or so days. Is there a faster way to solve this issue? To summarize the problem: given time ranges t1 and t2 on day d, how can I determine the remaining time left in d that is longer than the minimum time block m? This data is assembled on demand via AJAX as the user moves between calendar months. Results are cached per-page-load, so if the user goes to July a second time, the data that was generated the first time would be reused. Any other details that would help, let me know. Edit Per request, the database structure (or the part that is relevant here) *events* id (bigint) title (varchar) *event_times* id (bigint) event_id (bigint) venue_id (bigint) start (bigint) end (bigint) *venues* id (bigint) name (varchar) min_block (int) min_start (varchar) max_start (varchar)
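    A sketch of an alternative that skips the 15-minute grid entirely (the function and key names are assumptions, not the original code): sort the day's events once, then measure the gaps between consecutive events and the day's boundaries; every gap of at least min_block seconds is an open block. Days with no events fall straight through to the tail check and come back as one full-day block:

        // $events: array of array('start' => ts, 'end' => ts) for one day, from event_times
        function openBlocks(array $events, $dayStart, $dayEnd, $minSeconds) {
            usort($events, function ($a, $b) { return $a['start'] - $b['start']; });
            $blocks = array();
            $cursor = $dayStart;                      // earliest bookable moment of the day
            foreach ($events as $e) {
                if ($e['start'] - $cursor >= $minSeconds) {
                    $blocks[] = array('start' => $cursor, 'end' => $e['start']);
                }
                $cursor = max($cursor, $e['end']);    // max() copes with overlapping events
            }
            if ($dayEnd - $cursor >= $minSeconds) {   // gap after the last event
                $blocks[] = array('start' => $cursor, 'end' => $dayEnd);
            }
            return $blocks;
        }

    This is O(n log n) per day in the number of events rather than 96+ slot iterations, and the formatting of the start/end labels can stay exactly as it is now.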

    Read the article

  • Is it possible to start (and stop) a thread inside a DLL?

    - by Jerry Dodge
    I'm pondering some ideas for building a DLL for some common stuff I do. One thing I'd like to check is whether it's possible to run a thread inside of a DLL. I'm sure I would be able to at least start it, and have it automatically free on terminate (and make it forcefully terminate itself) - that I can see wouldn't be much of a problem. But once I start it, I don't see how I can continue communicating with it (especially to stop it), mainly because each call to the DLL is unique (as far as my knowledge tells me), but I also know very little of the subject. I've seen how on some occasions, a DLL can be loaded at the beginning and released at the end when it's not needed anymore. I have 0 knowledge or experience with this method, other than just seeing something related to it; couldn't even tell you what or how, I don't remember. But is this even possible? I know about ActiveX/COM but that is not what I want - I'd like just a basic DLL that can be used across languages (specifically C#). Also, if it is possible, then how would I go about doing callbacks from the DLL to the app? For example, when I start the thread, I most probably will assign a function (which is inside the EXE) to be the handler for the events (which are triggered from the DLL). So I guess what I'm asking is - how to load a DLL for continuous work and release it when I'm done - as opposed to the simple method of calling individual functions in the DLL as needed. In the same case - I might assign variables or create objects inside the DLL. How can I make sure that once I assign that variable (or create the object), it will still be available the next time I call the DLL? Obviously it would require a mechanism to Initialize/Finalize the DLL (i.e. create the objects inside the DLL when the DLL is loaded, and free the objects when the DLL is unloaded). EDIT: In the end, I will wrap the DLL inside of a component, so when an instance of the component is created, the DLL will be loaded and a corresponding thread will be created inside the DLL; then when the component is freed, the DLL is unloaded. I also need to make sure that if there are, for example, 2 of these components, there will be 2 instances of the DLL loaded, one for each component. Is this in any way related to the use of an IInterface? Because I also have 0 experience with this. No need to answer it directly with sample source code - a link to a good tutorial would be great.
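    For illustration only (the post asks for pointers rather than code, and every name below is made up), a DLL that keeps a thread alive between calls usually exports a start/stop pair and hands the caller an opaque handle; stdcall exports like these can also be P/Invoked from a C# EXE. A rough Delphi sketch, assuming a version where TThread.Start exists (2010+; older versions use Resume):

        library Worker;

        uses
          Windows, Classes;

        type
          TProgressCallback = procedure(Value: Integer); stdcall;

          TWorkerThread = class(TThread)
          public
            Callback: TProgressCallback;
          protected
            procedure Execute; override;
          end;

        procedure TWorkerThread.Execute;
        begin
          while not Terminated do
          begin
            if Assigned(Callback) then
              Callback(42);                  // report back into the EXE
            Sleep(1000);
          end;
        end;

        function StartWorker(ACallback: TProgressCallback): Pointer; stdcall;
        var
          T: TWorkerThread;
        begin
          T := TWorkerThread.Create(True);   // created suspended
          T.Callback := ACallback;
          T.FreeOnTerminate := False;        // lifetime is controlled by StopWorker
          T.Start;
          Result := T;                       // opaque handle kept by the caller
        end;

        procedure StopWorker(Handle: Pointer); stdcall;
        var
          T: TWorkerThread;
        begin
          T := TWorkerThread(Handle);
          T.Terminate;
          T.WaitFor;
          T.Free;
        end;

        exports
          StartWorker, StopWorker;

        begin
        end.

    Because the caller holds one handle per StartWorker call, two components each get their own thread and state without needing two copies of the DLL in memory.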

    Read the article

  • Webcast Q&A: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    This week we had the fifth webcast in our WebCenter in Action webcast series, "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter", where customers Giovani Dacumos and Minh Ong from the Los Angeles Department of Building & Safety (LADBS), and Sheetal Paranjpye and Rajiv Desai from Oracle Partner 3Di, shared how Oracle WebCenter is powering LADBS' externally facing website and providing a superior self-service experience for their customers. We asked the speakers to provide some dialogue for Q&A. Giovani Dacumos, Director of Systems, and Minh Ong, LADBS Q: Did you run into any issues when integrating all of the different applications together? A: Yes. We did have issues integrating a secure sign on between the portal and other legacy applications. We used portlets and iframes to overcome those. This is a new technology for us and we are also learning as we go, so there were a lot of challenges in developing and implementing our vision. Q: What has been the biggest benefit your end users have seen? A: The biggest benefit for our end users is ease-of-use. We've given them a system that provided a new and improved source of information, as well as a very organized flow of transaction processing. It has made our online service very user friendly. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that? A: There was no internal resistance during the implementation, only challenges. As mentioned earlier, this is a new technology for us. We've come across issues that needed assistance from Oracle. Working with 3Di and Oracle has helped us tremendously to find solutions to our implementation issues. Q: Given the performance, what do you estimate to be the top end capacity of the system? A: With the current performance and architecture we have, we are able to support approx 300-400 concurrent users. We would need more hardware to support additional user load. Q: What's the overview or summary of feedback from the users interacting with the site? A: LADBS has a wide spectrum of customers, from simple users like homeowners to large construction firms. Anything new that we offer could be a little bit challenging for some, but overall, the customers liked it. They saw a huge improvement on the usability. Q: Can you describe the impressions about the site before and after the project within LADBS? A: The old site was using old technology and it was hard for us to keep on building into it as we got more business requirements. It made our application seem a bit complicated. It was confusing for our new customers to use and we've improved on this with the new site. It's now easier for them to complete their transactions and, at the same time, allowed us to provide more useful information. Sheetal Paranjpye and Rajiv Desai, 3Di Q: Did you run into any obstacles when implementing the solution? A: Yes, we did run into some obstacles. One of the key show stoppers was the issue with portlet to portal communication. The GIS viewer (portlet) needed information to be passed to and from Permit LA (Portal), but we were able to get everything configured and up and working quickly! Q: Was there a lot of custom work that needed to be done for this particular solution? A: We have done some customizations where workflows/task flows are involved.
Q: What do you think were the keys to success for rolling out WebCenter? A: Having a service oriented architecture and using portlets have been the key areas for rolling out Oracle WebCenter at LADBS. The Oracle WebCenter Content integration allows the flexibility to business users to maintain the content, which has really cut down on the reliance on IT, and employee productivity has increased as a result. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action! Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Hue, saturation, brightness, contrast effect in hlsl

    - by Vibhore Tanwer
    I am new to pixel shaders, and I am trying to write a simple brightness, contrast, hue, saturation effect. I have written a shader for it, but I suspect it is not producing the correct result. Brightness, contrast, and saturation are working fine; the problem is with hue. If I apply hue between -1 and 1, it seems to work fine, but to make things sharper I need to apply a hue value between -180 and 180, like we can apply hue in Paint.NET. Here is my code. // Amount to shift the Hue, range 0 to 6 float Hue; float Brightness; float Contrast; float Saturation; float Alpha; sampler Samp : register(S0); // Converts the rgb value to hsv, where H's range is -1 to 5 float3 rgb_to_hsv(float3 RGB) { float r = RGB.x; float g = RGB.y; float b = RGB.z; float minChannel = min(r, min(g, b)); float maxChannel = max(r, max(g, b)); float h = 0; float s = 0; float v = maxChannel; float delta = maxChannel - minChannel; if (delta != 0) { s = delta / v; if (r == v) h = (g - b) / delta; else if (g == v) h = 2 + (b - r) / delta; else if (b == v) h = 4 + (r - g) / delta; } return float3(h, s, v); } float3 hsv_to_rgb(float3 HSV) { float3 RGB = HSV.z; float h = HSV.x; float s = HSV.y; float v = HSV.z; float i = floor(h); float f = h - i; float p = (1.0 - s); float q = (1.0 - s * f); float t = (1.0 - s * (1 - f)); if (i == 0) { RGB = float3(1, t, p); } else if (i == 1) { RGB = float3(q, 1, p); } else if (i == 2) { RGB = float3(p, 1, t); } else if (i == 3) { RGB = float3(p, q, 1); } else if (i == 4) { RGB = float3(t, p, 1); } else /* i == -1 */ { RGB = float3(1, p, q); } RGB *= v; return RGB; } float4 mainPS(float2 uv : TEXCOORD) : COLOR { float4 col = tex2D(Samp, uv); float3 hsv = rgb_to_hsv(col.xyz); hsv.x += Hue; // Put the hue back to the -1 to 5 range //if (hsv.x > 5) { hsv.x -= 6.0; } hsv = hsv_to_rgb(hsv); float4 newColor = float4(hsv,col.w); float4 colorWithBrightnessAndContrast = newColor; colorWithBrightnessAndContrast.rgb /= colorWithBrightnessAndContrast.a; colorWithBrightnessAndContrast.rgb = colorWithBrightnessAndContrast.rgb + Brightness; colorWithBrightnessAndContrast.rgb = ((colorWithBrightnessAndContrast.rgb - 0.5f) * max(Contrast + 1.0, 0)) + 0.5f; colorWithBrightnessAndContrast.rgb *= colorWithBrightnessAndContrast.a; float greyscale = dot(colorWithBrightnessAndContrast.rgb, float3(0.3, 0.59, 0.11)); colorWithBrightnessAndContrast.rgb = lerp(greyscale, colorWithBrightnessAndContrast.rgb, col.a * (Saturation + 1.0)); return colorWithBrightnessAndContrast; } technique TransformTexture { pass p0 { PixelShader = compile ps_2_0 mainPS(); } } Please, if anyone can help me learn what I am doing wrong, or has any suggestions? Any help will be of great value. EDIT: Images of the effect at hue 180: On the left hand side, the effect I got with @teodron's answer. On the right hand side, the effect Paint.NET gives and that I'm trying to reproduce.
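    One detail worth checking (an assumption about the cause, not a verified fix): the hue channel produced by rgb_to_hsv lives on a six-unit scale (the comments say -1 to 5), i.e. 60 degrees per unit, so a Hue value expressed in degrees (-180 to 180) has to be divided by 60 and wrapped back into that range before hsv_to_rgb sees it:

        float3 hsv = rgb_to_hsv(col.xyz);
        hsv.x += Hue / 60.0;             // degrees -> sextant units (360 / 6)
        hsv.x = fmod(hsv.x + 6.0, 6.0);  // wrap back into the 0..6 range
        float3 shifted = hsv_to_rgb(hsv);

    With the existing hsv_to_rgb, an index of 5 falls into the final else branch, which already yields the right sextant, so only the scaling and wrapping appear to need changing.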

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >