Search Results

Search found 2061 results on 83 pages for 'james ward'.


  • How do I use Ruby metaprogramming to refactor this common code?

    - by James Wenton
    I inherited a project with a lot of badly written Rake tasks that I need to clean up. Because the Rakefiles are enormous and prone to bizarre, nonsensical dependencies, I'm simplifying and isolating things by refactoring everything into classes. Specifically, the pattern is the following:

        namespace :foobar do
          desc "Frozz the foobar."
          task :frozzify do
            unless Rake.application.lookup('_frozzify')
              require 'tasks/foobar'
              Foobar.new.frozzify
            end
            Rake.application['_frozzify'].invoke
          end
          # Above pattern repeats many times.
        end
        # Several namespaces, each with tasks that follow this pattern.

    In tasks/foobar.rb, I have something that looks like this:

        class Foobar
          def frozzify()
            # The real work happens here.
          end
          # ... Other tasks also in the :foobar namespace.
        end

    For me, this is great because it allows me to separate the task dependencies from each other and move them to another location entirely, and I've been able to drastically simplify things and isolate the dependencies. The Rakefile doesn't hit a require until you actually try to run a task. Previously this caused serious issues because you couldn't even list the tasks without it blowing up.

    My problem is that I'm repeating this idiom very frequently. Notice the following patterns:

    - For every namespace :xyz_abc, there is a corresponding class in the file tasks/[namespace].rb, with a class name that looks like XyzAbc.
    - For every task in a particular namespace, there is an identically named method in the associated namespace class. For example, if namespace :foo_bar has a task :apples, you would expect to see def apples() ... inside the FooBar class, which itself is in tasks/foo_bar.rb.
    - Every task :t defines a "meta-task" _t (the task name prefixed with an underscore) which is used to do the actual work.
    - I still want to be able to specify a desc description for the tasks I define, and that will be different for each task.
    - And, of course, I have a small number of tasks that don't follow the above pattern at all, so I'll be specifying those manually in my Rakefile.

    I'm sure that this can be refactored in some way so that I don't have to keep repeating the same idiom over and over, but I lack the experience to see how it could be done. Can someone give me an assist?
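
    One possible direction (a sketch only, not tested against this project): a small helper that generates the repeated namespace/task boilerplate from a table of task names and descriptions, deriving the class and file names from the namespace. The helper name lazy_tasks is made up here, and the sketch assumes the Rake DSL methods (namespace, task, desc) are in scope where it is defined, as they are at the top level of a classic Rakefile.

        # Sketch: generate the repeated idiom from a declarative table.
        def lazy_tasks(ns, task_descriptions)
          class_name = ns.to_s.split('_').map(&:capitalize).join
          namespace ns do
            task_descriptions.each do |name, description|
              desc description
              task name do
                meta = "_#{name}"
                unless Rake.application.lookup(meta)
                  require "tasks/#{ns}"
                  Object.const_get(class_name).new.send(name)
                end
                Rake.application[meta].invoke
              end
            end
          end
        end

        # Usage in the Rakefile:
        lazy_tasks :foobar, :frozzify => "Frozz the foobar."

    Tasks that don't fit the convention can still be written out by hand alongside the generated ones.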

    Read the article

  • CFPreferences for another (or all) users

    - by James Turner
    I'm working on a background service which needs to read several users' iTunes settings (the users will opt in via a helper application which they run from their login). Is there an easy way to read the preferences of a user other than the current one using CFPreferences? The docs, for example for CFPreferencesCopyValue, explicitly state 'Do not use arbitrary user and host names, instead pass the pre-defined domain qualifier constants' when passing the 'userName' argument to the various functions.

    Read the article

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a 2nd level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id @GeneratedValue
            private long id;

            @Column(length=5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms. After the cache, they take an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done that will increase performance?
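
    One thing worth checking (a hedged suggestion, not a verified fix for this benchmark): the second-level cache does not cache query results on its own, so a Criteria.list() still runs the full SQL unless the separate query cache is enabled and the Criteria is marked cacheable. A minimal sketch, using the question's MyEntity and Session:

        // Sketch only: enable the query cache in hibernate.cfg.xml first, e.g.
        //   <property name="hibernate.cache.use_query_cache">true</property>
        // then mark the criteria as cacheable so its result set is cached too.
        import java.util.List;

        import org.hibernate.Session;

        public class CachedListing {
            @SuppressWarnings("unchecked")
            public static List<MyEntity> listAll(Session session) {
                return session.createCriteria(MyEntity.class)
                        .setCacheable(true)              // use the query cache
                        .setCacheRegion("myEntityList")  // optional named region
                        .list();
            }
        }

    The first call still pays the full query cost (plus cache population), so any gain only shows up on repeated executions.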

    Read the article

  • CakePHP HABTM Filtering

    - by James Haigh
    Hi, I've got two tables - users and servers, and for the HABTM relationship, users_servers. Users HABTM servers and vice versa. I'm trying to find a way for Cake to select the servers that a user is assigned to. I'm trying things like $this->User->Server->find('all'); which just returns all the servers, regardless of whether they belong to the user. $this->User->Server->find('all', array('conditions' => array('Server.user_id' => 1))) just gives an unknown column SQL error. I'm sure I'm missing something obvious but just need someone to point me in the right direction. Thanks!

    Read the article

  • Incorrect string encodings

    - by James
    Note: I have read all of the related PHP, UTF-8, character encoding articles that are usually suggested, but my question relates to data inserted before I applied such techniques. I am wishing to retrospectively fix all character encoding problems. Now all connections are set as utf8 using PDO. PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES utf8' Unfortunately, a large amount of data was inserted that is of questionable encoding before I had implemented correct character encoding practices. As displayed by: $sql = "SELECT name FROM data LIMIT 3"; foreach ($pdo->query($sql) as $row) { $name = $row['name']; echo $name . "\n"; echo utf8_encode($name) . "\n"; echo utf8_decode($name) . "\n"; echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_encode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo htmlspecialchars(utf8_decode($name), ENT_QUOTES, 'UTF-8') . "\n"; echo '<hr/>'; } Which produces: Antonín Dvořák AntonÃÆÃ­n DvoÃâ¦Ãâ¢ÃÆÃ¡k Anton??­n Dvo??????¡k Antonín Dvořák AntonÃÆÃ­n DvoÃâ¦Ãâ¢ÃÆÃ¡k ---------- Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ????? ?????????? Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶ ñÃâ¬Ã¡Ã´ ýáùáÿÃâ¬ÃµÃ¡Ã¶ ---------- Tiësto Tiësto Tiësto Tiësto Tiësto Tiësto ---------- When removing 'SET NAMES utf8' with PDO it produces the data: Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák Antonín DvoÅák Antonín DvoÃÂák Antonín Dvorák ---------- ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ???? ????????? Ô±ÖÕ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿ÖÕµÕ¡Õ¶ ???? ????????? ---------- Tiësto Tiësto Ti?sto Tiësto Tiësto ---------- And here is a dump of the database rows concerned: DROP TABLE IF EXISTS `data`; CREATE TABLE IF NOT EXISTS `data` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(80) NOT NULL, PRIMARY KEY (`id`), KEY `name` (`name`(10)), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0; INSERT INTO `data` (`id`, `name`) VALUES (0, 'Antonín Dvořák'), (1, 'Ô±Ö€Õ¡Õ´ Ô½Õ¡Õ¹Õ¡Õ¿Ö€ÕµÕ¡Õ¶'), (2, 'Tiësto'); The 3rd and 6th lines of the 3rd row "Tiësto" are then correctly echoed. I'm just unsure what is the best way to correct encodings/detect the encodings of bad strings and correct, etc.

    Read the article

  • How do I write recursive anonymous functions?

    - by James T Kirk
    In my continued effort to learn Scala, I'm working through 'Scala by Example' by Odersky, and in the chapter on first-class functions, the section on anonymous functions avoids a situation that would call for a recursive anonymous function. I have a solution that seems to work. I'm curious if there is a better answer out there.

    From the pdf, code to showcase higher-order functions:

        def sum(f: Int => Int, a: Int, b: Int): Int =
          if (a > b) 0 else f(a) + sum(f, a + 1, b)
        def id(x: Int): Int = x
        def square(x: Int): Int = x * x
        def powerOfTwo(x: Int): Int = if (x == 0) 1 else 2 * powerOfTwo(x-1)
        def sumInts(a: Int, b: Int): Int = sum(id, a, b)
        def sumSquares(a: Int, b: Int): Int = sum(square, a, b)
        def sumPowersOfTwo(a: Int, b: Int): Int = sum(powerOfTwo, a, b)

        scala> sumPowersOfTwo(2,3)
        res0: Int = 12

    From the pdf, code to showcase anonymous functions:

        def sum(f: Int => Int, a: Int, b: Int): Int =
          if (a > b) 0 else f(a) + sum(f, a + 1, b)
        def sumInts(a: Int, b: Int): Int = sum((x: Int) => x, a, b)
        def sumSquares(a: Int, b: Int): Int = sum((x: Int) => x * x, a, b)
        // no sumPowersOfTwo

    My code:

        def sumPowersOfTwo(a: Int, b: Int): Int =
          sum((x: Int) => {
            def f(y: Int): Int = if (y == 0) 1 else 2 * f(y - 1)
            f(x)
          }, a, b)

        scala> sumPowersOfTwo(2,3)
        res0: Int = 12
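
    One alternative (a sketch, not necessarily the "better answer"): lift the recursion into a recursive function value with a lazy val, which keeps the definition local to sumPowersOfTwo without wrapping it inside the anonymous argument:

        // Sketch: a self-referential function value. `lazy val` lets the value
        // refer to itself inside a method body; the reference is only followed
        // when the function is actually applied.
        def sumPowersOfTwo(a: Int, b: Int): Int = {
          lazy val powerOfTwo: Int => Int =
            x => if (x == 0) 1 else 2 * powerOfTwo(x - 1)
          sum(powerOfTwo, a, b)
        }

    This is still a named local value rather than a truly anonymous function; a genuinely anonymous recursive function needs a fixed-point combinator, which is usually more ceremony than it is worth here.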

    Read the article

  • Validating parameters according to a fixed reference

    - by James P.
    The following method is for setting the transfer type of an FTP connection. Basically, I'd like to validate the character input (see comments). Is this going overboard? Is there a more elegant approach? How do you approach parameter validation in general? Any comments are welcome.

        public void setTransferType(Character typeCharacter, Character optionalSecondCharacter)
                throws NumberFormatException, IOException {
            // http://www.nsftools.com/tips/RawFTP.htm#TYPE
            // Syntax: TYPE type-character [second-type-character]
            //
            // Sets the type of file to be transferred. type-character can be any of:
            //
            //   * A - ASCII text
            //   * E - EBCDIC text
            //   * I - image (binary data)
            //   * L - local format
            //
            // For A and E, the second-type-character specifies how the text should
            // be interpreted. It can be:
            //
            //   * N - Non-print (not destined for printing). This is the default if
            //     second-type-character is omitted.
            //   * T - Telnet format control (<CR>, <FF>, etc.)
            //   * C - ASA Carriage Control
            //
            // For L, the second-type-character specifies the number of bits per
            // byte on the local system, and may not be omitted.
            final Set<Character> acceptedTypeCharacters = new HashSet<Character>(Arrays.asList(
                    new Character[] {'A','E','I','L'} ));
            final Set<Character> acceptedOptionalSecondCharacters = new HashSet<Character>(Arrays.asList(
                    new Character[] {'N','T','C'} ));

            if( acceptedTypeCharacters.contains(typeCharacter) ) {
                if( new Character('A').equals( typeCharacter )
                        || new Character('E').equals( typeCharacter ) ){
                    if( acceptedOptionalSecondCharacters.contains(optionalSecondCharacter) ) {
                        executeCommand("TYPE " + typeCharacter + " " + optionalSecondCharacter );
                    }
                } else {
                    executeCommand("TYPE " + typeCharacter );
                }
            }
        }
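
    One more declarative alternative (a sketch; executeCommand is assumed to exist as in the original, and the enum names are invented): model the legal values as enums so the compiler restricts the input and the validation logic collapses to the A/E second-character rule:

        import java.io.IOException;

        // Sketch: enums make illegal type characters unrepresentable, so runtime
        // validation shrinks to deciding whether a second character applies.
        public class TransferTypeExample {

            public enum TransferType { A, E, I, L }
            public enum TextFormat { N, T, C }

            public void setTransferType(TransferType type, TextFormat format) throws IOException {
                if (type == TransferType.A || type == TransferType.E) {
                    // N is the protocol default when the second character is omitted
                    executeCommand("TYPE " + type + (format == null ? "" : " " + format));
                } else {
                    executeCommand("TYPE " + type);
                }
            }

            // assumed to exist in the original class
            private void executeCommand(String command) throws IOException {
                // send the raw FTP command here
            }
        }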

    Read the article

  • Using virtual fields in Doctrine_Query

    - by James Maroney
    Is there a way to insert logic based on virtual fields into a Doctrine_Query? I have defined a virtual field in my model, getStatus(), which I would ultimately like to use in a WHERE clause in my Doctrine_Query:

        ...
        ->AndWhere('x.status = ?', $status);

    "status", however, is not a column in the table; it is instead computed by business logic in the model. Filtering the Collection after executing the query works in some situations, but not when a Doctrine_Pager is thrown into the mix, as it computes its offsets and such before you have access to the Collection. Am I best off ditching Doctrine_Pager and rebuilding that functionality after modifying the Doctrine_Collection?

    Read the article

  • How can I store this kind of graph in neo4j for fast traversal?

    - by James
    This is a graph whose nodes exist in many connected components at once, because a node's relationships are a collection of edge groups such that only one edge per edge group can be present at once. I need to be able to find all of the connected components that a node exists in. What would be the best way to store this graph in neo4j to quickly find all of the connected components that a node exists in? Is there a way to use the built-in traversals to do this? Also: is there a name for this kind of graph? I'd appreciate any help/ideas.

    Update: Sorry for not being clear. All nodes are of the same type. Nodes have a variable number of edge groups. Exactly one edge from each edge group needs to be chosen for a particular connected component. I'm going to try to explain through an example:

        Node x1 is related to: (x2 or x3 or x4) AND (x5 or x6) AND (x7)
        Node x2 is related to: (x8) AND (x9 or x10)

    So x1's first edge group is (x2, x3, x4), its second edge group is (x5, x6), and its third edge group is (x7). Here are a few connected components that x1 exists in:

        CC1: x1 is related to: x2, x5, x7
             x2 is related to: x8, x9
        CC2: x1 is related to: x2, x6, x7
             x2 is related to: x8, x9
        CC3: x1 is related to: x3, x5, x7
        CC4: x1 is related to: x3, x6, x7
        etc.

    I'm grateful for your help with this.

    Read the article

  • Callback for When jqGrid Finishes Reloading?

    - by James
    Hi, I am using the jqGrid plug-in, and at one point I need to refresh the grid and set the selected row to match the record that I am showing in detail in another section of the page. I have the following code, but it does not work:

        $("#AllActions").trigger("reloadGrid").setSelection(selectedRow);

    The selectedRow parameter comes from an event handler that gets called when the data is changed and the grid needs to be updated. I'm pretty sure that the problem is that the grid has not finished loading when the selection is being set, because if I put a call to alert() between the calls to trigger() and setSelection(), it works. I would be grateful for any advice.

    Edit: it looks like http://stackoverflow.com/questions/2529581/jqgrids-setselect-does-not-work-after-reloadgrid is related but did not get resolved.
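
    One approach worth trying (a sketch only; loadComplete is jqGrid's post-load callback, and the selector and variable names are taken from the question): set the selection from loadComplete so it runs after the reload has actually finished:

        // Sketch: register (or update) the loadComplete callback, then reload.
        // setSelection runs only once the grid has finished rebuilding its rows.
        var $grid = $("#AllActions");
        $grid.jqGrid('setGridParam', {
            loadComplete: function () {
                $grid.jqGrid('setSelection', selectedRow);
            }
        });
        $grid.trigger("reloadGrid");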

    Read the article

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the boost libraries I'm using. Here's what the .pro file looks like:

        QT += core gui
        TARGET = MyTarget
        TEMPLATE = app
        CONFIG += no_keywords \
            link_pkgconfig
        SOURCES += file1.cpp \
            file2.cpp \
            file3.cpp
        PKGCONFIG += my_package \
            sqlite3
        LIBS += -lsqlite3 \
            -lboost_signals \
            -lboost_date_time
        HEADERS += file1.h \
            file2.h \
            file3.h
        FORMS += mainwindow.ui
        RESOURCES += Resources/resources.qrc

    This produces the following command:

        g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

    Note: mylib1 and mylib2 are statically compiled by another project, placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file:

        prefix=/usr/local
        exec_prefix=${prefix}
        libdir=${exec_prefix}/lib
        includedir=${prefix}/include

        Name: my_package
        Description: My component package
        Version: 0.1
        URL: http://example.com
        Libs: -L${libdir} -lmylib1 -lmylib2
        Cflags: -I${includedir}/my_package/

    The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to boost libraries that both the app and mylib1/mylib2 are using. We have another build method using scons, and it properly orders things for the linker. Its build command is below:

        g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lboost_signals -lboost_date_time -lQtGui -lQtCore

    Note that the principal difference is the order of the boost libs. Scons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant, as I have hand-modified the qmake-produced makefile and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file, despite what qmake thinks it should be?
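
    One workaround that is sometimes used (a sketch, not verified against this project): take my_package out of PKGCONFIG and splice pkg-config's output into LIBS yourself, so -lmylib1/-lmylib2 are emitted before the boost entries instead of after them. The lines below would replace the PKGCONFIG and LIBS assignments shown above; $$system() runs the command at qmake time.

        # Sketch: control the link order explicitly instead of letting qmake
        # emit the LIBS entries (boost) ahead of the pkg-config libraries.
        PKGCONFIG = sqlite3
        QMAKE_CXXFLAGS += $$system(pkg-config --cflags my_package)
        LIBS += $$system(pkg-config --libs my_package) \
            -lsqlite3 \
            -lboost_signals \
            -lboost_date_time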

    Read the article

  • How to define RequestMapping prioritization

    - by James Skidmore
    I have a situation where I need the following request mappings:

        @RequestMapping(value={"/{section}"})
        ...method implementation here...

        @RequestMapping(value={"/support"})
        ...method implementation here...

    There is an obvious conflict. My hope was that Spring would resolve this automatically and map /support to the second method, and everything else to the first, but it instead maps /support to the first method. How can I tell Spring to allow an explicit RequestMapping to override a RequestMapping with a PathVariable in the same place? (Edit: this is simplified; I know that having those two RequestMappings alone wouldn't make much sense.)
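
    If the mapping order can't be influenced, one pragmatic fallback (a sketch only; the controller, model attribute, and view names are made up) is to let the catch-all method recognize the special value and delegate to the explicit handler, so /support behaves correctly no matter which mapping wins:

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.Model;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RequestMapping;

        // Sketch: the {section} handler detects "support" and hands off to the
        // dedicated method, as if the explicit mapping had priority.
        @Controller
        public class SectionController {

            @RequestMapping("/{section}")
            public String section(@PathVariable String section, Model model) {
                if ("support".equals(section)) {
                    return support(model);
                }
                model.addAttribute("section", section);
                return "sectionView";
            }

            @RequestMapping("/support")
            public String support(Model model) {
                return "supportView";
            }
        }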

    Read the article

  • JPA: persisting object, parent is ok but child not updated

    - by James.Elsey
    Hello, I have my domain object, Client. I've got a form on my JSP that is pre-populated with its data; I can take in amended values and persist the object. Client has an abstract entity called MarketResearch, which is then extended by one of three more concrete sub-classes. I have a form to pre-populate some MarketResearch data, but when I make changes and try to persist the Client, they don't get saved. Can someone give me some pointers on where I've gone wrong?

    My 3 domain classes are as follows (accessors etc. removed):

        public class Client extends NamedEntity {
            @OneToOne
            @JoinColumn(name = "MARKET_RESEARCH_ID")
            private MarketResearch marketResearch;
            ...
        }

        @Inheritance(strategy = InheritanceType.JOINED)
        public abstract class MarketResearch extends AbstractEntity {
            ...
        }

        @Entity(name="MARKETRESEARCHLG")
        public class MarketResearchLocalGovernment extends MarketResearch {
            @Column(name = "CURRENT_HR_SYSTEM")
            private String currentHRSystem;
            ...
        }

    This is how I'm persisting:

        public void persistClient(Client client) {
            if (client.getId() != null) {
                getJpaTemplate().merge(client);
                getJpaTemplate().flush();
            } else {
                getJpaTemplate().persist(client);
            }
        }

    To summarize: if I change something on the parent object, it persists, but if I change something on the child object, it doesn't. Have I missed something blatantly obvious? Thanks
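
    One likely culprit (a hedged suggestion, not a confirmed diagnosis): merge() and persist() only propagate to an association that is marked to cascade, so without a cascade setting the merged Client is saved while the MarketResearch changes are ignored. A minimal sketch of the mapping change, reusing the question's classes:

        import javax.persistence.CascadeType;
        import javax.persistence.JoinColumn;
        import javax.persistence.OneToOne;

        // Sketch: cascade MERGE (and PERSIST for new children) from Client to
        // MarketResearch so persistClient() touches both objects.
        public class Client extends NamedEntity {

            @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.MERGE })
            @JoinColumn(name = "MARKET_RESEARCH_ID")
            private MarketResearch marketResearch;

            // ...
        }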

    Read the article

  • Multiple result sets from Oracle in ODP.NET, without ref cursors

    - by James L
    SQL Server is able to return the results of multiple queries in a single round trip, e.g.:

        select a, b, c from y;
        select d, e, f from z;

    Oracle doesn't like this syntax. It is possible to use reference cursors, like this:

        begin
          open :1 for select count(*) from a;
          open :2 for select count(*) from b;
        end;

    However, you incur a penalty in opening/closing cursors, and you can hold database locks for an extended period. What I'd like to do is retrieve the results for these two queries in one shot, using ODP.NET. Is it possible?

    Read the article

  • Structure map and generics (in XML config)

    - by James D
    Hi, I'm using the latest StructureMap (2.5.4.264), and I need to define some instances in the XML configuration for StructureMap using generics. However, I get the following 103 error:

        Unhandled Exception: StructureMap.Exceptions.StructureMapConfigurationException: StructureMap configuration failures:
        Error: 103
        Source: Requested PluginType MyTest.ITest`1[[MyTest.Test,MyTest]] configured in Xml cannot be found
        Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]'
        System.ApplicationException: Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]' ---> System.TypeLoadException: Could not load type 'MyTest.ITest`1' from assembly 'StructureMap, Version=2.5.4.264, Culture=neutral, PublicKeyToken=e60ad81abae3c223'.
           at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
           at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.Type.GetType(String typeName, Boolean throwOnError)
           at StructureMap.Graph.TypePath.FindType()
           --- End of inner exception stack trace ---
           at StructureMap.Graph.TypePath.FindType()
           at StructureMap.Configuration.GraphBuilder.ConfigureFamily(TypePath pluginTypePath, Action`1 action)

    A simple replication of the code is as follows:

        public interface ITest<T> { }
        public class Test { }
        public class Concrete : ITest<Test> { }

    Which I then wish to define in the XML configuration as follows:

        <DefaultInstance
            PluginType="MyTest.ITest`1[[MyTest.Test,MyTest]],MyTest"
            PluggedType="MyTest.Concrete,MyTest"
            Scope="Singleton" />

    I've been racking my brain, but I can't see what I'm doing wrong - I've used Type.GetType to verify the type is actually valid, which it is. Anyone have any ideas? Thanks!

    Read the article

  • How can I make the output from tapply() into a data.frame

    - by James Thompson
    I have a data.frame in R that looks like this:

             score    rms template  aln_id   description
        1  -261.410  4.951 2f22A.pdb 2F22A_1  S_00001_0000002_0
        2  -231.987 21.813 1wb9A.pdb 1WB9A_4  S_00002_0000002_0
        3  -263.722  4.903 2f22A.pdb 2F22A_3  S_00003_0000002_0
        4  -269.681 17.732 1wbbA.pdb 1WBBA_6  S_00004_0000002_0
        5  -258.621 19.098 1rxqA.pdb 1RXQA_3  S_00005_0000002_0
        6  -246.805  6.889 1rxqA.pdb 1RXQA_15 S_00006_0000002_0
        7  -281.300 16.262 1wbdA.pdb 1WBDA_11 S_00007_0000002_0
        8  -271.666  4.193 2f22A.pdb 2F22A_2  S_00008_0000002_0
        9  -277.964 13.066 1wb9A.pdb 1WB9A_5  S_00009_0000002_0
        10 -261.024 17.153 1yy9A.pdb 1YY9A_2  S_00001_0000003_0

    I can calculate summary statistics on the data.frame like this:

        > tapply( d$score, d$template, mean )
        1rxqA.pdb 1wb9A.pdb 1wbbA.pdb 1wbdA.pdb 1yy9A.pdb 2f22A.pdb
        -252.7130 -254.9755 -269.6810 -281.3000 -261.0240 -265.5993

    Is there an easy way to coerce this output back into a data.frame? I'd like it to have these two columns: d$template and mean. I love tapply, but right now I'm cutting and pasting the results from tapply into a text file and hacking it up a bit to get the summary statistics that I want with appropriate names. This feels very wrong, and I'd like to do something better!
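
    A couple of idiomatic options (a sketch; d is the data.frame above, and the result names are arbitrary): aggregate() with a formula returns a data.frame directly, and a tapply() result can be coerced by hand:

        # Option 1 (sketch): aggregate() groups and summarizes in one step and
        # returns a data.frame with a template column and the mean in the score column.
        means <- aggregate(score ~ template, data = d, FUN = mean)

        # Option 2 (sketch): keep tapply() and rebuild the data.frame yourself.
        res <- tapply(d$score, d$template, mean)
        means2 <- data.frame(template = names(res), mean = as.numeric(res),
                             row.names = NULL)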

    Read the article

  • How do I return a variable from $.post() in jQuery? Closure variable?

    - by James Bao
    I am having trouble passing data retrieved from a $.post() function to use in other places in my code. I want to save the data as a variable and use it outside of the post() function. This is my code (braces reconstructed from the question's description):

        var last_update = function() {
            $.post('/--/feed', {func:'latest', who:$.defaults.login}, function($j){
                _j = JSON.parse($j);
                alert(_j.text); // This one works
            });
            alert(_j.text); // This one doesn't
        };
        last_update(); // run the function

    Please help!
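
    $.post() is asynchronous, so the second alert runs before the response has arrived. The usual pattern (a sketch reusing the question's URL and parameters) is to hand the result to a callback rather than reading it from an outer variable:

        // Sketch: whoever needs the data passes a function in, and it runs
        // once the response has actually arrived.
        var last_update = function(done) {
            $.post('/--/feed', {func: 'latest', who: $.defaults.login}, function($j) {
                done(JSON.parse($j));
            });
        };

        last_update(function(data) {
            alert(data.text); // safe: the response is available here
        });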

    Read the article

  • Can I encrypt web.config with a custom protection provider whose assembly is not in the GAC?

    - by James
    I have written a custom protected configuration provider for my web.config. When I try to encrypt my web.config with it, I get the following error from aspnet_regiis (this is running in my MSBuild):

        aspnet_regiis.exe -pef appSettings . -prov CustomProvider

        Could not load file or assembly 'MyCustomProviderNamespace' or one of its dependencies. The system cannot find the file specified.

    After checking with the Fusion log, I can confirm it is checking both the GAC and 'C:/WINNT/Microsoft.NET/Framework/v2.0.50727/' (the location of aspnet_regiis), but it cannot find the provider. I do not want to move my component into the GAC; I want to leave the custom assembly in my ApplicationBase so I can copy it around to various servers without having to pull/push from the GAC. Here is my provider configuration in the web.config:

        <configProtectedData>
          <providers>
            <add name="CustomProvider"
                 type="MyCustomProviderNamespace.MyCustomProviderClass, MyCustomProviderNamespace" />
          </providers>
        </configProtectedData>

    Has anyone got any ideas?

    Read the article

  • acts-as-taggable-on: find tags with name LIKE, sort by tag_counts?

    - by James
    Hi, I'm using the Rails plugin acts-as-taggable-on, and I'm trying to find the top 5 most used tags whose names match or partially match a given query. When I do:

        User.skill_counts.order('count DESC').limit(5).where('name LIKE ?', params[:query])

    it returns the following error:

        ActiveRecord::StatementInvalid: SQLite3::SQLException: ambiguous column name: name:
        SELECT tags.*, COUNT(*) AS count FROM "tags"
        INNER JOIN users ON users.id = taggings.taggable_id
        LEFT OUTER JOIN taggings ON tags.id = taggings.tag_id AND taggings.context = 'skills'
        WHERE (taggings.taggable_type = 'User')
        AND (taggings.taggable_id IN(SELECT users.id FROM "users"))
        AND (name LIKE 'asd')
        GROUP BY tags.id, tags.name
        HAVING COUNT(*) > 0
        ORDER BY count DESC LIMIT 5

    But when I do User.skill_counts.first.name, it returns "alliteration". I'd appreciate any help on this matter.
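
    The error comes from SQLite not knowing which table's name column is meant in the joined query. One way around it (a sketch, assuming the same scopes as above) is to qualify the column and wildcard the term so partial matches work:

        # Sketch: qualify the column with its table name so SQLite knows which
        # "name" is meant, and wrap the query in % for a partial (LIKE) match.
        User.skill_counts
            .where('tags.name LIKE ?', "%#{params[:query]}%")
            .order('count DESC')
            .limit(5)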

    Read the article

  • Finding N contiguous zero bits in an integer to the left of the MSB from another

    - by James Morris
    First we find the MSB of the first integer, and then try to find a region of N contiguous zero bits within the second number which is to the left of the MSB of the first integer. Here is the C code for my solution:

        typedef unsigned int t;
        unsigned const t_bits = sizeof(t) * CHAR_BIT;

        _Bool test_fit_within_left_of_msb( unsigned width,
                                           t val1,
                                           t val2,
                                           unsigned* offset_result)
        {
            unsigned offbit = 0;
            unsigned msb = 0;
            t mask;
            t b;

            while(val1 >>= 1)
                ++msb;

            while(offbit + width < t_bits - msb)
            {
                mask = (((t)1 << width) - 1) << (t_bits - width - offbit);
                b = val2 & mask;

                if (!b) {
                    *offset_result = offbit;
                    return true;
                }

                if (offbit++) /* this conditional bothers me! */
                    b <<= offbit - 1;

                while(b <<= 1)
                    offbit++;
            }
            return false;
        }

    Aside from faster ways of finding the MSB of the first integer, the commented test for a zero offbit seems a bit extraneous, but necessary to skip the highest bit of type t if it is set. I have also implemented similar algorithms working to the right of the MSB of the first number, and they don't require this seemingly extra condition. How can I get rid of this extra condition, or are there far more optimal solutions?
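
    A different way to look at it (a sketch, not a drop-in replacement, since it only reports whether a fit exists rather than the offset): complement val2, mask off everything at or below val1's MSB, and detect a run of width set bits with the classic AND-with-shift reduction, which avoids walking bit by bit and the special case entirely.

        #include <limits.h>
        #include <stdbool.h>

        typedef unsigned int t;
        static const unsigned t_bits = sizeof(t) * CHAR_BIT;

        /* Sketch: returns true if val2 has `width` contiguous zero bits strictly
         * to the left of val1's most significant set bit. Assumes width >= 1. */
        static bool fits_left_of_msb(unsigned width, t val1, t val2)
        {
            unsigned msb = 0;
            while (val1 >>= 1)
                ++msb;                           /* index of val1's MSB */

            if (msb + 1 >= t_bits)
                return false;                    /* no room above the MSB at all */

            t zeros = ~val2;                     /* 1s wherever val2 has 0s */
            zeros &= ~(((t)1 << (msb + 1)) - 1); /* keep only bits above the MSB */

            /* After width-1 rounds, a bit is still set only where a run of
             * `width` ones began, i.e. where `width` zeros fit in val2. */
            for (unsigned i = 1; i < width && zeros; ++i)
                zeros &= zeros >> 1;

            return zeros != 0;
        }

    If the offset is needed, it can be recovered afterwards by locating the highest remaining set bit of zeros and converting it to the same top-down offset convention the original function uses.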

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution that worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The sqlite file is stored on the local hard drive. The following function is one of many which access the sqlite database; it runs fine the first time it is called, but throws the above error the second time it is called (it is called as part of a for loop in another function):

        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                           (modified, head, tail))
            connection.commit()
            connection.close()

    Many thanks.
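
    A frequent cause of this error in a single process (offered as a hedged guess, since the rest of the code isn't shown) is another connection, for example one opened inside get_setting() or by the calling loop, that still holds the write lock because it was never committed or closed. One way to rule that out (a sketch; the shared-connection helper is invented here) is to funnel all access through one connection and use it as a context manager so each write commits immediately:

        import os
        import sqlite3

        _connection = None

        def get_connection(index_file):
            """Sketch: lazily open one shared connection for the whole process."""
            global _connection
            if _connection is None:
                _connection = sqlite3.connect(index_file, timeout=30)
            return _connection

        def update_index(filepath):
            modified = os.stat(filepath).st_mtime
            head, tail = os.path.split(filepath)
            index_file = get_setting('Local', 'index')  # from the original code
            conn = get_connection(index_file)
            with conn:  # commits on success, rolls back on error
                conn.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                             (modified, head, tail))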

    Read the article
