Search Results

Search found 11991 results on 480 pages for 'cedric copy'.

Page 408/480

  • Some images fail to load on Windows Server 2008

    - by Guffa
    I have an application running on Windows Server 2008 that processes uploaded images. It currently handles about 8,000 images per day, creating 11 different sizes of each image. The problem is that the application sometimes fails to load an image, and I get the error "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.".

    The upload only accepts files with a JPEG extension (jpg/jpeg/jpe) or with a JPEG MIME type, and from what I can tell those images really are JPEG images. If I look at an affected file in Windows Explorer on the server, it can successfully extract the thumbnail, but if I try to open the image, Paint gives me the error "This is not a valid bitmap file, or its format is not currently supported." If I copy the image to my own computer, running Windows 7, there is no problem opening it: it works in Paint, Windows Photo Viewer, Adobe Bridge, and Photoshop. If I load the image using Image.FromStream the same way as in the application running on the server, it loads just fine. (I have copied the file back to the server, and it still doesn't work, so nothing in the copying process changes it.)

    When I look at the image information in Bridge, I see that the images were created using Picasa 3.0, but other than that I can't see anything special about them. I haven't yet found anyone having the same problem, or any known problems like this with the Picasa application. Has anyone had a similar problem, or does anyone know if there is something special about images created using Picasa? Is there an image codec that needs installing on the server to handle all kinds of JPEG images?

    Read the article

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (or so I thought) task all day now. I've tried SQL Server Management Studio's Import Data feature:

    - Create an empty database
    - Tasks - Import Data...
    - .NET Framework Data Provider for Odbc
    - Valid DSN (verified it connects)
    - Copy data from one or more tables or views
    - Check one VERY simple table
    - Click Preview

    I then get the error:

        The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the message above. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
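
    The error text offers a clue: the wizard appears to be quoting the table name ANSI-style ('"table_name"'), and MySQL's default sql_mode treats double-quoted text as a string literal, not an identifier. A possible workaround, an educated guess rather than a verified fix, is to switch MySQL into ANSI_QUOTES mode before the wizard connects:

        -- Make MySQL accept ANSI-style "double-quoted" identifiers.
        -- GLOBAL affects new connections (the wizard's ODBC session included)
        -- and requires the SUPER privilege.
        SET GLOBAL sql_mode = 'ANSI_QUOTES';

        -- Confirm the mode took effect before re-running the import:
        SELECT @@GLOBAL.sql_mode;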

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Event is a base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has two fields: (uchar) keyCode and (bool) isDown.

    Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.)

    Now the actual question: say I want my handlers to poll the events from a queue of type Event, how do I do that?

    - Dynamic pointers with reference counting sound like too big a solution.
    - Making copies is more difficult than it sounds, since I'm only receiving a reference to a base type; each time I would need to check the type of the event, downcast to EventKey, and then make a copy to store in the queue. That sounds like the only solution, but it is unpleasant, since I would need to know every single type of event and check it for every event received. Sounds like a bad plan.
    - I could allocate the events dynamically and then send around pointers to those events, enqueueing them where wanted. But short of reference counting, how would I be able to keep track of that memory? Do you know any way to implement a very light reference counter that wouldn't interfere with the user?

    What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa

    Read the article

  • MySQL to PostgreSQL and Named Scope

    - by Lowgain
    I've got a named scope for one of my models that works fine. The code is:

        named_scope :inbox_threads, lambda { |user| {
          :include => [:deletion_flags, :recipiences],
          :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id],
          :group => "msg_threads.id"
        }}

    This works fine on my local copy of the app with a MySQL database, but when I push the app to Heroku (which only uses PostgreSQL), I get the following error:

        ActiveRecord::StatementInvalid (PGError: ERROR: column "msg_threads.subject" must appear in the GROUP BY clause or be used in an aggregate function: SELECT "msg_threads"."id" AS t0_r0, "msg_threads"."subject" AS t0_r1, "msg_threads"."originator_id" AS t0_r2, "msg_threads"."created_at" AS t0_r3, "msg_threads"."updated_at" AS t0_r4, "msg_threads"."url_key" AS t0_r5, "deletion_flags"."id" AS t1_r0, "deletion_flags"."user_id" AS t1_r1, "deletion_flags"."msg_thread_id" AS t1_r2, "deletion_flags"."confirmed" AS t1_r3, "deletion_flags"."created_at" AS t1_r4, "deletion_flags"."updated_at" AS t1_r5, "recipiences"."id" AS t2_r0, "recipiences"."user_id" AS t2_r1, "recipiences"."msg_thread_id" AS t2_r2, "recipiences"."created_at" AS t2_r3, "recipiences"."updated_at" AS t2_r4 FROM "msg_threads" LEFT OUTER JOIN "deletion_flags" ON deletion_flags.msg_thread_id = msg_threads.id LEFT OUTER JOIN "recipiences" ON recipiences.msg_thread_id = msg_threads.id WHERE (recipiences.user_id = 1 AND deletion_flags.user_id IS NULL) GROUP BY msg_threads.id)

    I'm not as familiar with the inner workings of Postgres, so what would I need to add here to get this working? Thanks!
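
    For context, PostgreSQL enforces the SQL standard here: every selected column must appear in the GROUP BY clause or inside an aggregate, while MySQL silently accepts grouping by the id alone. One common reshaping, sketched below with the table and column names taken from the error message, is to push the duplicate-producing joins into a subquery so the outer SELECT needs no GROUP BY at all:

        -- A sketch of the usual rewrite; the WHERE values mirror the
        -- conditions in the named_scope (user.id shown here as 1).
        SELECT msg_threads.*
        FROM msg_threads
        WHERE msg_threads.id IN (
            SELECT mt.id
            FROM msg_threads mt
            LEFT OUTER JOIN deletion_flags df ON df.msg_thread_id = mt.id
            LEFT OUTER JOIN recipiences r     ON r.msg_thread_id  = mt.id
            WHERE r.user_id = 1
              AND df.user_id IS NULL
            GROUP BY mt.id   -- grouping by the single key column is legal
        );

    Translated back to the named_scope, that would mean expressing the filter as a subquery in :conditions instead of relying on :group together with :include.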

    Read the article

  • Can I encrypt web.config with a custom protection provider whose assembly is not in the GAC?

    - by James
    I have written a custom protected configuration provider for my web.config. When I try to encrypt my web.config with it, I get the following error from aspnet_regiis (this is running in my MSBuild):

        aspnet_regiis.exe -pef appSettings . -prov CustomProvider

        Could not load file or assembly 'MyCustomProviderNamespace' or one of its dependencies. The system cannot find the file specified.

    After checking the Fusion log, I can confirm it is checking both the GAC and C:/WINNT/Microsoft.NET/Framework/v2.0.50727/ (the location of aspnet_regiis), but it cannot find the provider. I do not want to move my component into the GAC; I want to leave the custom assembly in my ApplicationBase so I can copy it around to various servers without having to pull/push from the GAC. Here is my provider configuration in the web.config:

        <configProtectedData>
          <providers>
            <add name="CustomProvider" type="MyCustomProviderNamespace.MyCustomProviderClass, MyCustomProviderNamespace" />
          </providers>
        </configProtectedData>

    Has anyone got any ideas?

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table:

        SQL> desc src_table;
         Name            Null?    Type
         --------------- -------- ----------------------------
         ID              NOT NULL NUMBER(16)
         HASH            NOT NULL NUMBER(3)
         ........

        SQL> select count(*) from src_table;

          COUNT(*)
        ----------
          21108244

    Now let's create another table and copy two columns from src_table:

        SQL> set timing on
        SQL> create table dest_table(id number(16), hash number(20), type number(1));

        Table created.
        Elapsed: 00:00:00.01

        SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;

        21108244 rows created.
        Elapsed: 00:00:15.25

        SQL> ALTER TABLE dest_table ADD (CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));

        Table altered.
        Elapsed: 00:01:17.35

    It took Oracle less than two minutes. Now the same exercise, but with an IOT:

        SQL> CREATE TABLE dest_table_iot (
               id   NUMBER(16) NOT NULL,
               hash NUMBER(20) NOT NULL,
               type NUMBER(1)  NOT NULL,
               CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
             ) ORGANIZATION INDEX;

        Table created.
        Elapsed: 00:00:00.03

        SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;

    The insert into the IOT takes 18 hours! I have tried it on two different instances of Oracle, running on Windows and Linux, and got the same results. What is going on here? Why is it taking so long?
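
    A plausible explanation, though not verifiable from the transcript alone: an IOT is the primary-key B-tree itself, so rows arriving in random key order hit leaf blocks scattered across the whole index, while the heap insert above simply appends blocks (the APPEND hint buys a direct-path load for the heap table, not for the IOT). A mitigation worth testing is to hand the IOT its rows presorted in key order:

        -- A sketch, not a guaranteed fix: presort on the IOT's primary key
        -- (HASH, ID, TYPE) so leaf blocks fill sequentially instead of
        -- splitting randomly. TYPE is the constant 1, so it adds no sort key.
        INSERT INTO dest_table_iot (hash, id, type)
        SELECT hash, id, 1
        FROM src_table
        ORDER BY hash, id;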

    Read the article

  • Are certain open-source licenses more suitable than others for career growth?

    - by Francisco Garcia
    As a software engineer/programmer myself, I love being able to download code and learn from it. However, building software is what brings food to my table. I have doubts about the type of license I should use for my own personal projects, or when picking a project to learn from. There are already many questions about licenses on Stack Overflow, but I would like to make this one much more specific.

    If your main profession and way of living is building software: which type of license do you find more useful for you? I mean the license that can benefit you most as a professional, because it gives you more freedom to reuse the experience you gain. The GPL is a great license for building communities because it forces you to give back your work. However, I like BSD licenses because of their extra freedom. I know that if the code I am exploring is BSD-licensed, I might be able to expand not only my skills but also my programmer toolbox: whenever I am working for a company, I might recall that something similar was done in another project, and I will be able to copy or imitate certain parts of the code.

    I know there are religious wars over GPL vs. BSD, and it is not my intention to start one; many companies probably take snippets from GPL projects anyway. I just want to focus on the factor of professional enrichment, and I do not intend to discriminate against any license. I said I prefer BSD licenses, but I also use Linux, because the user base is bigger and so is the market demand.

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In a program I spawn a thread (the "working thread"). Here I copy some files, write some data to a database and eventually delete some other files or directories. Everything works fine.

    The problem now is that I decided to move the deletion to another thread. The working thread still copies the files or directories and writes to the database, but if some other files need to be deleted, it spawns a second thread, and that second thread deletes the needed files or directories. The problem is that the deletion used to work 100% of the time when done in the working thread; now that the same work is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to be deleted at that moment.

    Sometimes (but not always) the main thread is impersonating some user, so when needed the deleting thread also runs under impersonation, just to grant the permissions needed to delete the files, so that should not be the problem. Does anybody have a clue why this could be happening?

    Read the article

  • Can't find momd file: Core Data problems

    - by thekevinscott
    Aw geez! I screwed something up! I'm a Core Data noob, working on my first iOS app. After much Stack Overflowing, I'm using this code:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"momd"];
        if (!path) {
            path = [[NSBundle mainBundle] pathForResource:@"CoreData" ofType:@"mom"];
        }
        NSAssert(path != nil, @"Unable to find Resource in main bundle");

    CoreData is the name of my app. I tried to put initial data into the app by finding the path to the sqlite file in my iPhone Simulator and then inserting into that sqlite file directly. But at some point I moved the sqlite file (thinking it would create a fresh copy) and deleted the app from the simulator, and now the sqlite file is gone. I'm not sure if I'm leaving out some part of the process (this was a few hours ago), but the end result is that everything is screwed up.

    How do I regenerate this sqlite / momd file? "Clean" and "Clean all targets" are grayed out. I'm happy to post the relevant code from my app that would help shed some light on this problem, but there's tons of code relating to Core Data that I don't understand, so I'm not sure which part to post! Any help is greatly appreciated.

    Read the article

  • How to filter a node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list so that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was something like:

        <xsl:variable name="filteredList1" select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to the attribute @id_from_list1. Given these scoping constraints, it's not clear to me how I could refer to an attribute of the outer node list using nested predicates in this fashion. To get around the context-node issue, I tried a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly, and it's not clear to me how it fails. Using the above technique, filteredList1 has a length of 1 but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.

    Read the article

  • Mac: How to check if a file is still being copied, in C++?

    - by Peda Pola
    In my current project, we had a requirement to check whether a file is still being copied. We have already developed a library that gives us OS notifications like file_added, file_removed, file_modified and file_renamed on a particular folder, along with the corresponding file path. The problem is that if you add, say, a 1 GB file, it produces multiple notifications (file_added, file_modified, file_modified, ...) while the file is being copied. I decided to suppress these extra notifications by checking whether the file is still copying, and to ignore events based on that.

    I have written the code below, which takes a file path as an input parameter and tells whether the file is being copied. Details: on the Mac, while a file is being copied, its creation date is set to a date before 1970; once the copy finishes, the date is set to the current date. I am relying on this to decide that a file is being copied.

    Problem: when we copy a file in the Terminal, this fails. Please advise me on another approach.

        bool isBeingCopied(const boost::filesystem::path &filePath) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            bool isBeingCopied = false;
            if ([[[[NSFileManager defaultManager]
                    attributesOfItemAtPath:[NSString stringWithUTF8String:filePath.string().c_str()]
                                     error:nil] fileCreationDate] timeIntervalSince1970] < 0) {
                isBeingCopied = true;
            }
            [pool release];
            return isBeingCopied;
        }

    Read the article

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and I hope I won't need to for the purposes of this question, but we'll see. I've been trying to come up to speed on WPF, building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And at this point I'm pretty mystified.

    It appears to me as though Expression Blend is designed with the expectation that you already know, at design time, all of the objects that are going to be present in the UI. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row.

    I can readily write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all the controls to properties in either the data or the metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend. I can manually create all these controls, so that I have a set of controls I can apply styles to while working out the visual design of the app, but doing that is a real pain. I suppose I can write code that walks my data model and emits XAML for all these controls, then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think about it. Is this enough information for someone to try formulating an answer?

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description

    According to the EXPLAIN command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:

        Y.YEAR BETWEEN 1900 AND 2009 AND

    Code

    Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous):

        SELECT
          COUNT(1) as MEASUREMENTS,
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C,
          STATION S,
          STATION_DISTRICT SD,
          YEAR_REF Y FORCE INDEX(YEAR_IDX),
          MONTH_REF M,
          DAILY D
        WHERE
          -- For a specific city ...
          C.ID = 10663 AND
          -- Find all the stations within a specific unit radius ...
          6371.009 * SQRT(
            POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
            (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
             POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2))
          ) <= 50 AND
          -- Get the station district identification for the matching station.
          S.STATION_DISTRICT_ID = SD.ID AND
          -- Gather all known years for that station ...
          Y.STATION_DISTRICT_ID = SD.ID AND
          -- The data before 1900 is shaky; insufficient after 2009.
          Y.YEAR BETWEEN 1900 AND 2009 AND
          -- Filtered by all known months ...
          M.YEAR_REF_ID = Y.ID AND
          -- Whittled down by category ...
          M.CATEGORY_ID = '003' AND
          -- Into the valid daily climate data.
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Update

    The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:

        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        | id | select_type | table | type   | possible_keys                     | key          | key_len | ref                           | rows   | Extra       |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        |  1 | SIMPLE      | C     | const  | PRIMARY                           | PRIMARY      | 4       | const                         |      1 |             |
        |  1 | SIMPLE      | Y     | range  | YEAR_IDX                          | YEAR_IDX     | 4       | NULL                          | 160422 | Using where |
        |  1 | SIMPLE      | SD    | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.Y.STATION_DISTRICT_ID |      1 | Using index |
        |  1 | SIMPLE      | S     | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.SD.ID                 |      1 | Using where |
        |  1 | SIMPLE      | M     | ref    | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8       | climate.Y.ID                  |     54 | Using where |
        |  1 | SIMPLE      | D     | ref    | INDEX                             | INDEX        | 8       | climate.M.ID                  |     11 | Using where |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+

    Related

        http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
        http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
        http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause

    Thank you!
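
    One avenue worth exploring, offered as a sketch rather than a verified fix: YEAR_IDX apparently covers YEAR alone, so the range 1900-2009 matches year rows for every station district, all 160,422 of them. A composite index that leads with the join column lets the optimizer seek to one district first and apply the range second:

        -- Assumes YEAR_REF is joined on STATION_DISTRICT_ID and range-filtered
        -- on YEAR: equality column first, range column second.
        CREATE INDEX YEAR_DISTRICT_IDX ON YEAR_REF (STATION_DISTRICT_ID, YEAR);

    The FORCE INDEX(YEAR_IDX) hint would then need to go (or point at the new index), since it otherwise pins the optimizer to the single-column index.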

    Read the article

  • Ant path/pathelement not expanding properties correctly

    - by Jonas Byström
    My property gwt.sdk expands just fine everywhere else, but not inside a path/pathelement:

        <target name="setup.gwtenv">
          <property environment="env"/>
          <condition property="gwt.sdk" value="${env.GWT_SDK}">
            <isset property="env.GWT_SDK" />
          </condition>
          <property name="gwt.sdk" value="/usr/local/gwt" /> <!-- Default value. -->
        </target>

        <path id="project.class.path">
          <pathelement location="${gwt.sdk}/gwt-user.jar"/>
        </path>

        <target name="libs" depends="setup.gwtenv" description="Copy libs to WEB-INF/lib">
        </target>

        <target name="javac" depends="libs" description="Compile java source">
          <javac srcdir="src" includes="**" encoding="utf-8"
                 destdir="war/WEB-INF/classes"
                 source="1.5" target="1.5" nowarn="true"
                 debug="true" debuglevel="lines,vars,source">
            <classpath refid="project.class.path"/>
          </javac>
        </target>

    For instance, placing an echo of ${gwt.sdk} just above works, but not inside "project.class.path". Is it a bug, or do I have to do something that I'm not doing?

    Edit: I tried moving the property out of the setup.gwtenv target into "global space", which helped circumvent the problem.

    Read the article

  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton EndPoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.

    The deployment application lets you select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose which workstations to deploy the endpoint software to, which is Symantec's virus & spyware protection suite. Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.

    I really like how their product works, but I am not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, and wanted to check here to see if anyone is familiar with what I'm talking about, knows how it's accomplished, or has ideas how it could be accomplished. My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install. What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, when a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later; my hunch is that the setup.exe program is popping up this message.

    To the point: I'm looking for the mechanism by which one Windows-based machine can tell another to perform some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.NET MVC. I've pulled the results from various property database tables and sorted them into a master generic property response. The search form is passed via model binding and works great.

    Now I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, of course, I no longer have the original form to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but that seems inelegant. I'd like to serialize the search to my MS SQL database and return a unique key for that result set, which also helps with a "Send this query to a friend!" link. Killing two birds with one stone.

    Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key, and then reverse the process with said key? I'm using LINQ to SQL currently on an MS SQL Express install, though in production it'll be on MS SQL 2008.
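
    A note on feasibility: an IQueryable is an expression tree bound to a live data context, so it cannot be meaningfully serialized and revived later. The usual pattern is to persist the search criteria (the bound form values) under a key and re-run the query on demand. A minimal sketch of the backing table, with hypothetical names:

        -- Hypothetical SavedSearch table: the Id doubles as the shareable
        -- "Send this query to a friend!" key.
        CREATE TABLE SavedSearch (
            Id        UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
            Criteria  NVARCHAR(MAX)    NOT NULL,  -- the search form, serialized
            CreatedAt DATETIME         NOT NULL DEFAULT GETDATE()
        );

        -- Saving a search hands back the key in one round trip:
        INSERT INTO SavedSearch (Criteria)
        OUTPUT inserted.Id
        VALUES (N'{"city":"Denver","minBeds":2}');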

    Read the article

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name rather than by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets bloated too once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of the test data across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.
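
    One lower-tech alternative, sketched here with made-up tables rather than anything Spring-specific: keep fixtures as plain SQL scripts. Statements execute top to bottom, so parents insert before children and foreign keys can stay enabled for the whole test run, and per-test fixtures are simply separate scripts:

        -- fixture: users_and_orders.sql (hypothetical schema)
        -- Script order IS insert order, so no FK juggling is needed.
        INSERT INTO users  (id, name)           VALUES (1, 'alice');
        INSERT INTO users  (id, name)           VALUES (2, 'bob');

        -- Children reference parents already inserted above.
        INSERT INTO orders (id, user_id, total) VALUES (10, 1, 99.95);
        INSERT INTO orders (id, user_id, total) VALUES (11, 2, 12.50);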

    Read the article

  • How is this Perl code selecting two different elements from an array?

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand.

        my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || ();
        my @selected_heads = ();
        for my $i (0..1) {
            $selected_heads[$i] = int rand scalar @heads;
            for my $j (0..@heads-1) {
                last if (!grep $j eq $_, @selected_heads[0..$i-1]);
                $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF?
            }
            my $head_nr = sprintf "%04d", $i;
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt");
            OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache");
        }

    From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them.

    === NOTES ===
    The OSA framework is a framework of our own. Its modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.

    Read the article

  • ORDER BY in a SQL Server 2008 view

    - by eidylon
    Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally aren't ordered, because different people may use them for different things and want them ordered differently. This view, however, is used for a VERY SPECIFIC use case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

        CREATE VIEW season.CurrentStandingsOrdered
        AS
          SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
          FROM season.CurrentStandings
          ORDER BY
            GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
            GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

    It returns:

        GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME,
        WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST,
        DIFFERENTIAL, POINTS, FORFEITS, RANKING

    Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS, as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as-is in a new query window, it orders correctly, as specified by the ORDER BY clause.
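
    This is documented behaviour rather than a bug: since SQL Server 2005, the optimizer recognizes that TOP 100 PERCENT selects everything and discards the accompanying ORDER BY, so a view's ORDER BY guarantees nothing about result order. The dependable fix is to order in the query that reads the view:

        -- ORDER BY belongs on the outer query, not inside the view:
        SELECT *
        FROM season.CurrentStandingsOrdered
        ORDER BY
            GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
            GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;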

    Read the article

  • SQL Binary Microsoft Access - Combining two tables if specific field values are equal

    - by Jordan
    I am new to Microsoft Access and SQL, but I have a decent programming background and I believe this problem should be relatively simple. I have two tables that I have imported into Access. For context: one table is huge and contains generic, global data; the other is still big but contains specific, regional data. There is only one common field (column) between the two tables; let's call this common field CF. The other fields in the two tables are different.

    I'll take you through one iteration of what I need to do. I need to take each CF value in the smaller, regional table and find the matching CF value in the larger, global table. After finding the match, I need to take the whole record (row) from the global data and copy it over to the corresponding record in the smaller, regional table (this should involve creating the new fields). I need to do this for all CF values in the regional table.

    I was recommended to use SQL and a binary search, but I am unfamiliar with them. Let me know if you have any questions. I appreciate the help!
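
    For what it's worth, the matching here doesn't need a hand-written binary search: a join on CF expresses it directly, and the database engine chooses the lookup strategy itself. A sketch in Access SQL, where Regional, GlobalData and the global fields G1, G2 are hypothetical names standing in for the real ones:

        -- Build a combined table in one statement: each regional row is
        -- paired with its matching global row via the shared CF column.
        -- List the global fields explicitly to avoid a duplicate CF column.
        SELECT Regional.*, GlobalData.G1, GlobalData.G2
        INTO RegionalCombined
        FROM Regional
        INNER JOIN GlobalData ON Regional.CF = GlobalData.CF;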

    Read the article

  • Fixing merge conflicts?

    - by user291701
    I have two remote branches, "grape" and "master". I'm currently on "grape". Now I switch to "master": git checkout master Now I want to pull all changes from "grape" into "master" - is this the way to do it?: git merge origin grape It's my understanding that git will then pull all the current state of the remote branch "grape" into my local copy of "master". It will try to auto-merge for me. If there are conflicts, the files in conflict will have some conflict text actually injected into the file. I then have to go into those files, and delete the chunk I don't want (essentially telling git how to merge these files). For each file in conflict, do I add and commit the changes again?: git add problemfile1.txt git commit -m "Fixed merge conflict." git add problemfile2.txt git commit -m "Fixed another merge conflict." ... after I've fixed all the merge conflicts like above, do I just push to "master" again to finish up the process?: git push origin master or is there something else we need to do when we get into this conflict state? Thank you

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc.

    When you access the client directory via a web browser, I want it to use any requested file in the client directory if it exists, and otherwise to use the file in the web root. For example, let's say the client does not need their own index.html. Some code would determine that there is no index.html in /home/blah/www/clients/abc and would instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time; I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless.

    I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose, as there is too much maintenance needed and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory or in the web root. Web root is preferred. Thanks for any suggestions! :)

    Read the article

  • Nested functions not allowed in drawRect problem

    - by Martin
    I have a custom view onto which I draw some graphics from the drawRect function, which works fine. However, I'd like to draw based on the contents of an array I pass to the view just before I call setNeedsDisplay. In the drawRect function I try to access the array, but then I get a nested-functions error that I do not understand. Here's my code:

        //
        //  MyView.h
        //  window
        //

        #import <UIKit/UIKit.h>

        @interface MyView : UIView {
            NSArray *nary;
        }
        @property (nonatomic, copy) NSArray *nary;
        @end

        //
        //  MyView.m
        //  window
        //

        #import "MyView.h"

        @implementation MyView
        @synthesize nary;

        CGContextRef c;
        CGFloat black[4] = {0.0f, 0.0f, 0.0f, 1.0f};

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            c = UIGraphicsGetCurrentContext();
            CGContextSetStrokeColor(c, black);
            NSLog(@"mview");
            NSArray *ns = [nary objectAtIndex:0];
        }

        - (void)dealloc {
            [super dealloc];
        }
        @end

    Read the article

  • Using std::ifstream to load in an array of struct data type into a std::vector

    - by Sent1nel
    I am working on a bitmap loader in C++ and when moving from the C style array to the std::vector I have run into an usual problem of which Google does not seem to have the answer. 8 Bit and 4 bit, bitmaps contain a colour palette. The colour palette has blue, green, red and reserved components each 1 byte in size. // Colour palette struct BGRQuad { UInt8 blue; UInt8 green; UInt8 red; UInt8 reserved; }; The problem I am having is when I create a vector of the BGRQuad structure I can no longer use the ifstream read function to load data from the file directly into the BGRQuad vector. // This code throws an assert failure! std::vecotr quads; if (coloursUsed) // colour table available { // read in the colours quads.reserve(coloursUsed); inFile.read( reinterpret_cast(&quads[0]), coloursUsed * sizeof(BGRQuad) ); } Does anyone know how to read directly into the vector without having to create a C array and copy data into the BGRQuad vector?

    Read the article

  • Can I put my SQLite connection and cursor in a function?

    - by steini
    I was thinking I'd try to make my sqlite db connection a function instead of copy/pasting the ~6 lines needed to connect and execute a query all over the place. I'd like to make it versatile so I can use the same function for create/select/insert/etc... Below is what I have tried. The 'INSERT' and 'CREATE TABLE' queries are working, but if I do a 'SELECT' query, how can I work with the values it fetches outside of the function? Usually I'd like to print the values it fetches and also do other things with them. When I do it like below I get an error Traceback (most recent call last): File "C:\Users\steini\Desktop\py\database\test3.py", line 15, in <module> for row in connection('testdb45.db', "select * from users"): ProgrammingError: Cannot operate on a closed database. So I guess the connection needs to be open so I can get the values from the cursor, but I need to close it so the file isn't always locked. Here's my testing code: import sqlite3 def connection (db, arg): conn = sqlite3.connect(db) conn.execute('pragma foreign_keys = on') cur = conn.cursor() cur.execute(arg) conn.commit() conn.close() return cur connection('testdb.db', "create table users ('user', 'email')") connection('testdb.db', "insert into users ('user', 'email') values ('joey', 'foo@bar')") for row in connection('testdb45.db', "select * from users"): print row How can I make this work?

    Read the article
