Search Results

Search found 5568 results on 223 pages for 'dependency analysis'.

Page 189/223

  • RESTfully Nesting Resource Routes with Single Identifiers

    - by Craig Walker
    In my Rails app I have a fairly standard has_many relationship between two entities. A Foo has zero or more Bars; a Bar belongs to exactly one Foo. Both Foo and Bar are identified by a single integer ID value. These values are unique across all of their respective instances. Bar is existence-dependent on Foo: it makes no sense to have a Bar without a Foo. There are two ways to RESTfully reference instances of these classes. Given a Foo.id of "100" and a Bar.id of "200": Reference each Foo and Bar through their own "top-level" URL routes, like so: /foo/100 /bar/200 Reference Bar as a nested resource through its instance of Foo: /foo/100 /foo/100/bar/200 I like the nested routes in #2 as they more closely represent the actual dependency relationship between the entities. However, it does seem to involve a lot of extra work for very little gain. Assuming that I know about a particular Bar, I don't need to be told about a particular Foo; I can derive that from the Bar itself. In fact, I probably should be validating the routed Foo everywhere I go (so that you couldn't do /foo/150/bar/200, assuming Bar 200 is not assigned to Foo 150). Ultimately, I don't see what this brings me. So, are there any other arguments for or against these two routing schemes?
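
    For reference, a minimal sketch of what the two schemes look like in a Rails 2.x routes.rb (resource names follow the question; note that Rails pluralizes the URL segments to /foos and /bars by default):

      # config/routes.rb -- a sketch only; you would pick one scheme, not both
      ActionController::Routing::Routes.draw do |map|
        # Scheme 1: independent top-level resources
        map.resources :foos
        map.resources :bars

        # Scheme 2: Bar nested under its owning Foo, e.g. /foos/100/bars/200
        map.resources :foos do |foo|
          foo.resources :bars
        end
      end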

    Read the article

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good - CAtlException is not even derived from std::exception, and also we use our own exception hierarchy, and now we will have to catch CAtlException separately here and there, which is lots of extra code and error-prone. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code is not available as source - it comes compiled as a library, either static or dynamic. We use the static one - atls.lib. And... it is compiled in such a way that it has ATL::AtlThrowImpl() inside and some code calling it. I used a static analysis tool - it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my code. Now the linker says it sees two declarations of ATL::AtlThrowImpl(), which I suppose confirms that there's another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?
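
    For reference, the override mechanism described above looks roughly like the sketch below; MyThrowHandler is a placeholder for your own function, and whether this also reaches the code already compiled into atls.lib is exactly the open question here:

      // stdafx.h -- the macros must be defined before any ATL header is included
      #include <windows.h>                                      // for HRESULT

      __declspec(noreturn) void MyThrowHandler(HRESULT hr);     // e.g. throws one of our own exception types

      #define _ATL_CUSTOM_THROW
      #define AtlThrow(hr) MyThrowHandler(hr)                   // MyThrowHandler is a placeholder name

      #include <atlbase.h>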

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

      +---------+---------+------------+---------------------+---------------------+--------+------+
      | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
      +---------+---------+------------+---------------------+---------------------+--------+------+
      | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
      +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

      +---------+---------+------------+---------------------+--------+
      | User ID | Work ID | Machine ID | Production Minute   | Output |
      +---------+---------+------------+---------------------+--------+
      | 080025  | ABC123  | M01        | 2010-01-24 16:19    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:20    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:21    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:22    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:23    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:24    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:25    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:26    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:27    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:28    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:29    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:30    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:31    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:32    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:33    | 133    |
      | 080025  | ABC123  | M01        | 2010-01-24 16:34    | 133    |
      +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
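
    One way this kind of expansion is commonly done entirely in SQL is to join each event row against a helper table of integers covering the minute span of the event; a sketch (the table and column names here are illustrative, not the poster's actual schema):

      -- assumes a helper table minutes(n) holding 0, 1, 2, ... up to the longest event length
      INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
      SELECT e.user_id, e.work_id, e.machine_id,
             DATE_FORMAT(e.event_start_time + INTERVAL m.n MINUTE, '%Y-%m-%d %H:%i'),
             e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1)
      FROM   production_events e
      JOIN   minutes m ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    The JOIN fans each event row out into one row per elapsed minute, so hundreds of machines are handled by the same single statement.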

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and need a platform to target for the exodus. I'd be extremely appreciative of help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!

    Read the article

  • Compiling and linking libcurl to create a stand-alone DLL

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This has been the first version I managed to get compiled properly with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function and this is not OK. My DLL needs to be independent. I have no idea how to implement this. I've taken all day just to get what I've got compiled. I've got the runtime library set to Multi-threaded DLL (debug/release respectively) under C/C++ - Code Generation. I have a number of preprocessor definitions set - CURL_STATICLIB being one of them. Configuration Type is set to Dynamic Library; Use of MFC is set to Use MFC in a static library; Additional Library Directories is set to the lib folders (debug/release respectively). I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL is currently dependent on: CURLLIB.DLL MSVCR90D.DLL As I'm pretty new to C++ it could be a setting I'm missing in VS 2008 but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc. but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.
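
    For comparison, a typical static-link setup looks something like the sketch below; the exact library list depends on how the libcurl package was built, so treat the names as assumptions to verify rather than a recipe:

      // In every translation unit that includes curl (or in the project's preprocessor settings):
      #define CURL_STATICLIB
      #include <curl/curl.h>

      // Project settings (sketch):
      //   C/C++ -> Code Generation -> Runtime Library: /MT (release) or /MTd (debug);
      //       dropping the DLL runtime is what removes the MSVCR90(D).DLL dependency
      //   Linker -> Input -> Additional Dependencies: the *static* curl library from the
      //       package, plus the Windows libraries curl itself needs, commonly
      //       ws2_32.lib and wldap32.lib (plus the OpenSSL libs for an SSL-enabled build)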

    Read the article

  • Error while using JSFUnit/HtmlUnit/CSSParser

    - by brianf
    We've just recently converted our project to using Maven for builds and dependency management, and after the conversion I'm getting the following exception while trying to run any JSFUnit tests in my project. Exception class=[java.lang.UnsupportedOperationException] com.gargoylesoftware.htmlunit.ScriptException: CSSRule com.steadystate.css.dom.CSSCharsetRuleImpl is not yet supported. at com.gargoylesoftware.htmlunit.javascript.JavaScriptEngine$HtmlUnitContextAction.run(JavaScriptEngine.java:527) at net.sourceforge.htmlunit.corejs.javascript.Context.call(Context.java:537) ... All the dependencies and JARs for JSFUnit were pulled with Maven using the JBoss repository (http://repository.jboss.com/maven2/). We're using the following dependencies in the project: jboss-jsfunit-core 1.2.0.Final jboss-jsfunit-richfaces 1.2.0.Final richfaces-ui 3.3.2.GA openfaces 2.0 JSF 1.2_12 Facelets 1.1.14 Before the dependencies were being managed by Maven, we were able to run our JSFUnit tests just fine. I was able to semi-fix the issue by using a ss_css2.jar file that someone had tucked into our WEB-INF/lib directory (from before the Maven conversion). I'm hoping to find out if there's something else I can do to fix the dependencies in Maven rather than resorting to managing some of the dependencies myself.
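
    If the clash really is the newer CSS parser that HtmlUnit pulls in transitively, one Maven-side hedge is to exclude it and declare the parser version that worked before as a direct dependency; a sketch (the exclusion coordinates shown are assumptions to check against the output of mvn dependency:tree):

      <dependency>
        <groupId>org.jboss.jsfunit</groupId>
        <artifactId>jboss-jsfunit-core</artifactId>
        <version>1.2.0.Final</version>
        <exclusions>
          <exclusion>
            <groupId>net.sourceforge.cssparser</groupId>
            <artifactId>cssparser</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      <!-- then add a direct dependency on the CSS parser version that worked with the old ss_css2.jar -->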

    Read the article

  • rake test:units fails with status ()

    - by ander163
    New user, haven't been building tests as I go, so I'm an idiot. The application is running, but the tests fail. Here is what appears to be relevant: .... ** Execute test:units /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/event_test.rb" "test/unit/helpers/calendar1_helper_test.rb" "test/unit/helpers/events_helper_test.rb" "test/unit/helpers/homepage_helper_test.rb" "test/unit/helpers/main_helper_test.rb" "test/unit/helpers/mobile_helper_test.rb" "test/unit/helpers/notes_helper_test.rb" "test/unit/helpers/password_resets_helper_test.rb" "test/unit/helpers/projects_helper_test.rb" "test/unit/helpers/search_helper_test.rb" "test/unit/helpers/start_helper_test.rb" "test/unit/helpers/superadmin_helper_test.rb" "test/unit/helpers/tasks_helper_test.rb" "test/unit/helpers/user_sessions_helper_test.rb" "test/unit/helpers/users_helper_test.rb" "test/unit/note_test.rb" "test/unit/notifier_test.rb" "test/unit/project_test.rb" "test/unit/task_test.rb" "test/unit/user_session_test.rb" "test/unit/user_test.rb" /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement /usr/lib/ruby/gems/1.8/gems/hpricot-0.6.164/lib/universal-java1.6/fast_xs.bundle: [BUG] Segmentation fault ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0] rake aborted! Command failed with status (): [/usr/local/bin/ruby -I"lib:test" "/usr/loc...] /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:995:in `sh' /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1010:in `call'

    Read the article

  • Error while trying to install Community Engine: NameError - "Undefined local variable or method 'map'"

    - by floatingfrisbee
    I'm trying to install Community Engine using the instructions here: http://github.com/bborn/communityengine At first I thought it might be because I had Rails 2.3.5 and desert 0.5.3, which were higher versions than what was mentioned on the installation site. However, moving to Rails 2.3.4 and desert 0.5.2 did not work. Any ideas as to what might be going on? $ script/generate plugin_migration /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement /cygdrive/c/users/me/jesse/projects/ceng1/config/routes.rb:2: undefined local variable or method `map' for main:Object (NameError) from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_new_constant_marking' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:147:in `load_without_desert' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:18:in `load' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:32:in `__each_matching_file' from /usr/lib/ruby/gems/1.8/gems/desert-0.5.2/lib/desert/ruby/object.rb:17:in `load' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `each' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!' from /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:266:in `reload!' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:537:in `initialize_routing' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:188:in `process' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `send' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `run' from /cygdrive/c/users/me/jesse/projects/ceng1/config/environment.rb:6 from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1 from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script/generate:3

    Read the article

  • How can I improve this design?

    - by klausbyskov
    Let's assume that our system can perform actions, and that an action requires some parameters to do its work. I have defined the following base class for all actions (simplified for your reading pleasure): public abstract class BaseBusinessAction<TActionParameters> where TActionParameters : IActionParameters { protected BaseBusinessAction(TActionParameters actionParameters) { if (actionParameters == null) throw new ArgumentNullException("actionParameters"); this.Parameters = actionParameters; if (!ParametersAreValid()) throw new ArgumentException("Valid parameters must be supplied", "actionParameters"); } protected TActionParameters Parameters { get; private set; } protected abstract bool ParametersAreValid(); public void CommonMethod() { ... } } Only a concrete implementation of BaseBusinessAction knows how to validate that the parameters passed to it are valid, and therefore ParametersAreValid is an abstract function. However, I want the base class constructor to enforce that the parameters passed are always valid, so I've added a call to ParametersAreValid to the constructor and I throw an exception when the function returns false. So far so good, right? Well, no. Code analysis is telling me to "not call overridable methods in constructors", which actually makes a lot of sense because when the base class's constructor is called the child class's constructor has not yet been called, and therefore the ParametersAreValid method may not have access to some critical member variable that the child class's constructor would set. So the question is this: How do I improve this design? Do I add a Func<TActionParameters, bool> parameter to the base class constructor? If I did: public class MyAction : BaseBusinessAction<MyParameters> { public MyAction(MyParameters actionParameters, bool something) : base(actionParameters, ValidateIt) { this.something = something; } private bool something; public static bool ValidateIt() { return something; } } This would work because ValidateIt is static, but I don't know... Is there a better way? Comments are very welcome.
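
    One pattern that side-steps the virtual-call-in-constructor warning is to have each concrete class validate its parameters before they ever reach the base constructor, for example via a static helper in the derived class; a sketch (names mirror the question, and IsComplete is an invented stand-in for whatever the real checks are):

      public abstract class BaseBusinessAction<TActionParameters>
          where TActionParameters : IActionParameters
      {
          protected BaseBusinessAction(TActionParameters actionParameters)
          {
              if (actionParameters == null) throw new ArgumentNullException("actionParameters");
              this.Parameters = actionParameters;   // base no longer calls an overridable method
          }

          protected TActionParameters Parameters { get; private set; }

          public void CommonMethod() { /* ... */ }
      }

      public class MyAction : BaseBusinessAction<MyParameters>
      {
          public MyAction(MyParameters actionParameters)
              : base(Validated(actionParameters)) { }

          private static MyParameters Validated(MyParameters p)
          {
              if (p == null || !p.IsComplete)       // IsComplete is a hypothetical check
                  throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
              return p;
          }
      }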

    Read the article

  • Getting content from PHP: Trouble with POST and query.

    - by vgm64
    Apologies for my longest question on SO ever. I'm trying to interface with a php frontend for a mysql database in ROOT (a CERN framework in C++ for high energy physics analysis). To start off with, I tried to get this php interface to play nice with wget and curl first because I'm more familiar with them. The following command works: wget --post-data "hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases;" http://some.host.edu/log/log_query_matlab.php The results are: database1 database2 That's good. If I leave out the --post-data then I get the result: Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'admin'@'localhost' (using password: NO) in /log/log_query_matlab.php on line 6 i'm dead! Access denied for user 'admin'@'localhost' (using password: NO) Warning: mysql_query() [function.mysql-query]: Access denied for user 'admin'@'localhost' (using password: NO) in /log/log_query_matlab.php on line 29 Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in /log/log_query_matlab.php on line 29 I have access to the php script (read only), but the error itself isn't too important. What matters is that using ROOT, I use a function called as socket.SendRaw(message, message.Length()) (socket is a TSocket) and this gives me the same "error" as wget without the post-data switch if my "message" is "POST http://some.host.edu/log/log_query_matlab.php?hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases" This may be in vain, but does someone know a way I should format the "message" so that it includes something equivalent to the --post-data switch? Or, is there a standard way to format POST requests in a single line (I've seen multi-line stuff. Is that right?) Sorry I'm clueless! PS. The mysql query is show databases but the space has been replaced with _spazio_, Italian for space. The author of the db and php interface requires it (and various replacements for symbols), but has anyone seen this before? Trying to troubleshoot that was terrible!
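
    For what it is worth, a form-encoded POST sent over a raw socket is just a request line, headers, a blank line and the body; a sketch of the message the wget command above would produce (host, path and fields copied from the question, everything else is standard HTTP):

      #include <string>

      std::string body =
          "hostname=localhost:3306&un=joeuser&pw=psswd&myquery=show_spazio_databases";

      std::string request =
          "POST /log/log_query_matlab.php HTTP/1.0\r\n"
          "Host: some.host.edu\r\n"
          "Content-Type: application/x-www-form-urlencoded\r\n"
          "Content-Length: " + std::to_string(body.size()) + "\r\n"
          "\r\n" + body;

      // socket.SendRaw(request.c_str(), request.length());   // as in the question's TSocket call

    Note that std::to_string needs C++11; on an older ROOT build a snprintf buffer does the same job.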

    Read the article

  • Delphi - working with dll's for beginners

    - by doubleu
    Hi there, I'm a total newbie regarding DLLs. And I don't need to create them, I just need to use one. I've read some tutorials, but they weren't as helpful as I hoped. Here's the way I started: I've downloaded the SDK which I need to use (ESTOS Tapi Server). I read the docs and spotted the DLL which I need to use, which is ENetSN.dll, and so I registered it. Next I used Dependency Walker to take a look at the DLL - and I was puzzled because there are only these functions: DllCanUnloadNow, DllGetClassObject, DllRegisterServer and DllUnregisterServer, and these are not the functions mentioned in the docs. I think I have to call DllGetClassObject to get an object out of the DLL with which I can start to work. Unfortunately the tutorials I found don't mention how this is done (or I didn't understand it). There are also 3 examples delivered for VB and C++, but I wasn't able to 'translate' them into Delphi. If somebody knows a tutorial where this is explained or could give me a pointer in the right direction, I would be very thankful.
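
    That export list (DllGetClassObject, DllRegisterServer, ...) is the signature of a COM server, so such a DLL is normally driven through COM rather than by importing its functions directly; a sketch in Delphi (the ProgID is a placeholder that would have to come from the ESTOS SDK docs or the registry):

      uses
        ComObj;

      procedure TalkToTapiServer;
      var
        Srv: OleVariant;
      begin
        // late-bound COM: the real ProgID and the method names must come from the SDK documentation
        Srv := CreateOleObject('ESTOS.PlaceholderProgID');   // placeholder ProgID
        // Srv.SomeMethod(...);
      end;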

    Read the article

  • Mahout - Error when trying out Wikipedia examples

    - by Li'
    Note: this post is similar to "Caused by: java.lang.ClassNotFoundException: classpath" but with a different error message. I am trying to run the Wikipedia Bayes Example from https://cwiki.apache.org/confluence/display/MAHOUT/Wikipedia+Bayes+Example When I ran the following command: lis-macbook-pro:mahout-distribution-0.8 Li$ mahout wikipediaXMLSplitter -d examples/temp/enwiki-latest-pages-articles10.xml -o wikipedia/chunks -c 64 I got this error message: MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to classpath. MAHOUT_LOCAL is set, running locally SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/Users/Li/File/Java/mahout-distribution-0.8/examples/target/mahout-examples-0.8-job.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/Users/Li/File/Java/mahout-distribution-0.8/examples/target/dependency/slf4j-jcl-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.JCLLoggerFactory] Oct 21, 2013 4:25:47 PM org.slf4j.impl.JCLLoggerAdapter warn WARNING: Unable to add class: wikipediaXMLSplitter java.lang.ClassNotFoundException: wikipediaXMLSplitter at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:171) at org.apache.mahout.driver.MahoutDriver.addClass(MahoutDriver.java:236) at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:127) I am using Hadoop 1.2 and Mahout 0.8. mahout-distribution-0.8/bin has been added to $PATH. $MAHOUT_LOCAL is set to "True", so it runs locally. I don't know why I got "Unable to add class: wikipediaXMLSplitter".

    Read the article

  • Hardware acceleration issue with WPF application - Analyzing crash dumps

    - by Appu
    I have a WPF application crashing on the client machine. My initial analysis shows that this is because of H/W acceleration, and disabling H/W acceleration at the registry level solves the issue. Now I have to make sure that this is indeed caused by H/W acceleration. I have the crash dumps available, which have a stack trace: 9c52b020 80636ac7 9c52b0bc e1174818 9c52b0c0 nt!_SEH_prolog+0x1a 9c52b038 806379d6 e10378a4 9c52b0bc e1174818 nt!CmpQuerySecurityDescriptorInfo+0x23 9c52b084 805bfe5b e714b160 00000001 9c52b0bc nt!CmpSecurityMethod+0xce 9c52b0c4 805c01c8 e714b160 9c52b0f0 e714b15c nt!ObpGetObjectSecurity+0x99 9c52b0f4 8062f28f e714b160 8617f008 00000001 nt!ObCheckObjectAccess+0x2c 9c52b140 8062ff30 e1038008 0066a710 cde2b714 nt!CmpDoOpen+0x2d5 9c52b340 805bf488 0066a710 0066a710 8617f008 nt!CmpParseKey+0x5a6 9c52b3b8 805bba14 00000000 9c52b3f8 00000240 nt!ObpLookupObjectName+0x53c 9c52b40c 80625696 00000000 8acad448 00000000 nt!ObOpenObjectByName+0xea 9c52b508 8054167c 9c52b828 82000000 9c52b5ac nt!NtOpenKey+0x1c8 9c52b508 80500699 9c52b828 82000000 9c52b5ac nt!KiFastCallEntry+0xfc 9c52b58c 805e701e 9c52b828 82000000 9c52b5ac nt!ZwOpenKey+0x11 9c52b7fc 805e712a 00000002 805e70a0 00000000 nt!RtlpGetRegistryHandleAndPath+0x27a 9c52b844 805e73e3 9c52b864 00000014 9c52bbb8 nt!RtlpQueryRegistryGetBlockPolicy+0x2e 9c52b86c 805e79eb 00000003 e8af79dc 00000014 nt!RtlpQueryRegistryDirect+0x4b 9c52b8bc 805e7f10 e8af79dc 00000003 9c52b948 nt!RtlpCallQueryRegistryRoutine+0x369 9c52bb58 b8f5bca4 00000005 e6024b30 9c52bbb8 nt!RtlQueryRegistryValues+0x482 WARNING: Stack unwind information not available. Following frames may be wrong. 9c52bc00 b8f20a5b 00000005 85f4204c 85f4214c igxpmp32+0x44ca4 9c52c280 b8f1cc7b 890bd358 9c52c2b0 00000000 igxpmp32+0x9a5b 9c52c294 b8f11729 890bd358 9c52c2b0 00000a0c igxpmp32+0x5c7b 9c52c358 804ef19f 890bd040 86d2dad0 0000080c VIDEOPRT!pVideoPortDispatch+0xabf 9c52c368 bf85e8c2 9c52c610 bef6ce84 00000014 nt!IopfCallDriver+0x31 9c52c398 bf85e93c 890bd040 00232150 9c52c3f8 win32k!GreDeviceIoControl+0x93 9c52c3bc bebafc7b 890bd040 00232150 9c52c3f8 win32k!EngDeviceIoControl+0x1f 9c52d624 bebf3fa9 890bd040 bef2a28c bef2a284 igxpdx32+0x8c7b 9c52d6a0 8054167c 9c52da28 b915d000 9c52d744 igxpdx32+0x4cfa9 9c52d6a0 00000000 9c52da28 b915d000 9c52d744 nt!KiFastCallEntry+0xfc How do I confirm that the crash is caused by a H/W acceleration issue by looking at the above data? I am guessing VIDEOPRT!pVideoPortDispatch+0xabf indicates some error with the rendering. Is that correct? I am using WinDbg to view the crash dump.
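
    As an aside, to test the hardware-acceleration theory on a machine without touching the registry, WPF 4 also lets a process opt into software rendering in code; a sketch (needs .NET 4, so only applicable if the application can target it):

      using System.Windows;
      using System.Windows.Interop;
      using System.Windows.Media;

      public partial class App : Application
      {
          protected override void OnStartup(StartupEventArgs e)
          {
              // forces the software rasterizer for this process only
              RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
              base.OnStartup(e);
          }
      }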

    Read the article

  • Is jQuery always the answer?

    - by Kibbee
    I've come across a couple of questions, such as this one, and I really have to wonder why "Use jQuery" seems to be the answer when somebody asks how to do something in JavaScript. I understand that jQuery can save you a lot of time, and can help you out a lot, especially when you are doing a lot of fancy JavaScript in your site. However, in instances like this, and in many other instances, it seems like it's just jumping around the problem instead of answering the question. I also feel like this builds too much dependency on libraries. I've seen way too many developers that simply rely too much on libraries, and if they encounter a situation where they didn't have the library, they would be completely unable to function. I feel like there are already enough developers who don't know JavaScript, without just telling everybody to not learn JavaScript, and use jQuery. So, just to reiterate the question: do you think there's too much of a tendency to use jQuery for small pieces of JavaScript, when most of the functionality of jQuery isn't being used? Should developers be fluent in the use of bare JavaScript so they don't get too dependent on using libraries? [Additional related conversation topic] Does the existence of jQuery give too much slack to web browser developers who write the JavaScript engines? If we just have workarounds to cover all the inconsistencies in JavaScript, what pressure is there on browser makers to ensure that their JavaScript engine works as it should? I feel like this extrapolates the same problem discussed in SO Podcast #36 of "be conservative in what you send, liberal in what you accept". By being so liberal with bad JavaScript engines, and using a common library to work around the flaws, we are promoting their use, and extending the problem.

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, This is a biggie. I have a well-structured yet monolithic code base that has a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I go to deploy on application servers that may have different conflicting versions of my library. I'm dependent on around 30 jars right now and am mid-way through bnd-ing them up. Now some of my modules are easy to declare the versioned dependencies of, such as my networking components. They statically reference classes within the JRE and other bnd-ed libraries, but my JDBC-related components instantiate via Class.forName(...) and can use one of any number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database access components (via JDBC), etc. I wish for my code to still be usable without OSGi at all, as a single jar file with all my dependencies (via JARJAR), and also to be modular via the OSGi meta-data and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect if I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.)? Do I need to use a different class loading mechanism if I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
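
    On the narrower question of detecting an OSGi container at runtime, one container-neutral check is to ask the framework for the bundle of one of your own classes; a sketch (it assumes the OSGi core API is on the compile classpath, which can be declared as an optional import so the same jar still works outside a container):

      import org.osgi.framework.Bundle;
      import org.osgi.framework.FrameworkUtil;

      public final class OsgiDetector {
          private OsgiDetector() { }

          /** True when the given class was loaded by an OSGi framework (Felix, Equinox, ...). */
          public static boolean inOsgiContainer(Class<?> anchor) {
              try {
                  Bundle bundle = FrameworkUtil.getBundle(anchor);
                  return bundle != null;            // null when loaded outside a framework
              } catch (NoClassDefFoundError e) {
                  return false;                     // OSGi API not present at all
              }
          }
      }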

    Read the article

  • MSTest on x64 C++/CLI

    - by Oyvind
    I got a problem using MSTest on x64: the test project depends on a couple of C++/CLI assemblies, and fails to load for some reason. In Visual Studio, I get (stripped down): Error loading D:\xxx\Xxx.Test.dll: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.BadImageFormatException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format. Running MSTest manually in a command prompt, I get: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.IO.FileNotFoundException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. Details worth mentioning: the test project itself is compiled using 'Any CPU'; I use an x64-specific testrunconfig; Dependency Walker shows no missing native dependencies in the C++/CLI assembly (Common.Geometry.Native). Even more interesting, there is another test project in the same solution using the same C++/CLI assembly (Common.Geometry.Native), and it runs without any problems. I have also verified that there are no 32-bit assemblies/DLLs interfering. Any suggestions are welcome!

    Read the article

  • Pass enum value to method which is called by dynamic object

    - by user329588
    Hello. I'm working on a program which dynamically (at runtime) loads DLLs. For example: Microsoft.AnalysisServices.dll. In this DLL we have this enum: namespace Microsoft.AnalysisServices { [Flags] public enum UpdateOptions { Default = 0, ExpandFull = 1, AlterDependents = 2, } } and we also have this class Cube: namespace Microsoft.AnalysisServices { public sealed class Cube : ... { public Cube(string name); public Cube(string name, string id); .. .. .. } } I dynamically load this DLL and create a Cube object. Then I call the method Cube.Update(). This method deploys the Cube to the SQL Analysis server. But if I want to call this method with a parameter, Cube.Update(UpdateOptions.ExpandFull), I get an error because the method doesn't receive the appropriate parameter. I have already tried this, but it doesn't work: dynamic updateOptions = AssemblyLoader.LoadStaticAssembly("Microsoft.AnalysisServices", "Microsoft.AnalysisServices.UpdateOptions");//my class for loading assembly Array s = Enum.GetNames(updateOptions); dynamic myEnumValue = s.GetValue(1);//1 = ExpandFull dynamicCube.Update(myEnumValue);// == Cube.Update(UpdateOptions.ExpandFull) I know the error is in the myEnumValue parameter, but I don't know how to dynamically get the enum type from the assembly and pass it to the method. Does anybody know the solution? Thank you very much for answers and help!
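
    One approach that usually works here is to fetch the enum Type from the loaded assembly and build the value with Enum.Parse or Enum.ToObject, so the late-bound Update call receives a real UpdateOptions value rather than a string; a sketch (how the Assembly object is obtained is left to the question's own AssemblyLoader):

      using System;
      using System.Reflection;

      static class UpdateOptionsHelper
      {
          public static object ExpandFull()
          {
              // however your AssemblyLoader gets hold of the assembly:
              Assembly asm = Assembly.Load("Microsoft.AnalysisServices");

              Type updateOptions = asm.GetType("Microsoft.AnalysisServices.UpdateOptions");
              return Enum.Parse(updateOptions, "ExpandFull");
              // equivalently, by numeric value: Enum.ToObject(updateOptions, 1)
          }
      }

      // usage: dynamicCube.Update(UpdateOptionsHelper.ExpandFull());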

    Read the article

  • How can I use a class with the same name from another namespace in my class?

    - by Beau Simensen
    I have two classes with the same name in different namespaces. I want one of these classes to reference the other class. The reason is that I am migrating to some newer code and I want to update the old code to simply pass through to the newer code. Here is a super basic example: namespace project { namespace legacy { class Content { public: Content(const string& url) : url_(url) { } string url() { return url_; } private: string url_; }; }} // namespace project::legacy; namespace project { namespace current { class Content { public: Content(const string& url) : url_(url) {} string url() { return url_; } private: string url_; }; }} // namespace project::current; I expected to be able to do the following with project::legacy::Content, but I am having trouble with some linker issues. Is this an issue with how I'm trying to do this, or do I need to look more closely at my project files to see if I have some sort of weird dependency issue? #include "project/current/Content.h" namespace project { namespace legacy { class Content { public: Content(const string& url) : actualContent_(url) { } string url() { return actualContent_.url(); } private: project::current::Content actualContent_; }; }} // namespace project::legacy; The test application compiles fine if I try to reference an instance of project::current::Content, but if I try to reference project::current::Content from project::legacy::Content I get an: undefined reference to `project::current::Content::Content(...)` UPDATE: As it turns out, this was a GNU Autotools issue and was unrelated to the actual topic. Thanks to everyone for their help and suggestions!

    Read the article

  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. As per the documentation: http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used.<16 In part 16 they detail how to create a PassStub. In Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined Cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm. The PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj" If you want to generate one yourself, run msra.exe in Vista or run the Remote Assistance tool in WinXP. The documentation says this stub is the result of the function CryptEncrypt with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces a binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before. Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of: !#$&()+-=@^. The only symbols seen everywhere are: *_ . Otherwise the valid characters are 0-9 a-z A-Z. There are a total of 75 valid characters and they are always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say. Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx). The "PassStub Novice" is simply hex encoded, and looks to be the right length. The problem is I have no idea how to go from any hash to the way the PassStub looks.

    Read the article

  • How to delay static initialization within a property

    - by Mystagogue
    I've made a class that is a cross between a singleton (fifth version) and a (dependency injectable) factory. Call this a "Mono-Factory?" It works, and looks like this: public static class Context { public static BaseLogger LogObject = null; public static BaseLogger Log { get { return LogFactory.instance; } } class LogFactory { static LogFactory() { } internal static readonly BaseLogger instance = LogObject ?? new BaseLogger(null, null, null); } } //USAGE EXAMPLE: //Optional initialization, done once when the application launches... Context.LogObject = new ConLogger(); //Example invocation used throughout the rest of code... Context.Log.Write("hello", LogSeverity.Information); The idea is for the mono-factory could be expanded to handle more than one item (e.g. more than a logger). But I would have liked to have made the mono-factory look like this: public static class Context { private static BaseLogger LogObject = null; public static BaseLogger Log { get { return LogFactory.instance; } set { LogObject = value; } } class LogFactory { static LogFactory() { } internal static readonly BaseLogger instance = LogObject ?? new BaseLogger(null, null, null); } } The above does not work, because the moment the Log property is touched (by a setter invocation) it causes the code path related to the getter to be executed...which means the internal LogFactory "instance" data is always set to the BaseLogger (setting the "LogObject" is always too late!). So is there a decoration or other trick I can use that would cause the "get" path of the Log property to be lazy while the set path is being invoked?
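
    One sketch of an alternative that keeps the setter meaningful is to drop the nested holder class and make the getter itself lazy, with a lock standing in for the thread safety the nested-class trick provided (Lazy<T> could do the same on .NET 4); names follow the question:

      public static class Context
      {
          private static readonly object sync = new object();
          private static volatile BaseLogger log;

          public static BaseLogger Log
          {
              get
              {
                  if (log == null)
                      lock (sync)
                          if (log == null)
                              log = new BaseLogger(null, null, null);   // the default, as in the question
                  return log;
              }
              set { lock (sync) { log = value; } }
          }
      }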

    Read the article

  • Analyzing Windows crash dumps generated on XP/32 machines with Win7/64 ?

    - by Martin
    We have a problem with analyzing, on our development machines, Windows crash-dumps that were created on customer Windows XP/32 boxes. Many of our development machines are now Win7/64 boxes, but it appears that the crash-dumps generated under Windows XP cannot fully resolve their binary dependencies, thereby leading to warnings when displaying the call stacks in Visual Studio (2005). For example, the msvcr80.dll cannot be resolved when loaded from a Win7 machine when the dump was generated on Windows XP: On XP, the WinSxS path appears to be C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll -- on Win7, the WinSxS path to the same DLL version seems to be: x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d08d7da0442a985d (I got this info from a forum thread on CodeGuru that links to an MSDN article.) Visual Studio (2005) can now no longer correctly resolve the binaries for the crash-dump. How can I get Visual Studio to resolve all the correct binaries for my dump file? Note: I have already correctly set up the symbol server. The public symbols for most system DLLs (kernel32.dll, etc) and the symbols of our own DLLs are correctly loaded. It is just that the symbols of DLLs that reside in the WinSxS folder are not loaded, because it appears that Vista/7 uses a different path scheme for these DLLs than XP does, and therefore Visual Studio cannot find the dll (not the pdb) on the local dev machine and so cannot load the corresponding symbols for the dump file.

    Read the article

  • Reading email address from contacts fails with weird memory issue

    - by CapsicumDreams
    Hi all, I'm stumped. I'm trying to get a list of all the email addresses a person has. I'm using the ABPeoplePickerNavigationController to select the person, which all seems fine. I'm setting my ABRecordRef personDealingWith from the person argument to - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker shouldContinueAfterSelectingPerson:(ABRecordRef)person property:(ABPropertyID)property identifier:(ABMultiValueIdentifier)identifier { and everything seems fine up till this point. The first time the following code executes, all is well. When subsequently run, I can get issues. First, the code: // following line seems to make the difference (issue 1) // NSLog(@"%d", ABMultiValueGetCount(ABRecordCopyValue(personDealingWith, kABPersonEmailProperty))); // construct array of emails ABMultiValueRef multi = ABRecordCopyValue(personDealingWith, kABPersonEmailProperty); CFIndex emailCount = ABMultiValueGetCount(multi); if (emailCount > 0) { // collect all emails in array for (CFIndex i = 0; i < emailCount; i++) { CFStringRef emailRef = ABMultiValueCopyValueAtIndex(multi, i); [emailArray addObject:(NSString *)emailRef]; CFRelease(emailRef); } } // following line also matters (issue 2) CFRelease(multi); If compiled as written, there are no errors or static analysis problems. This crashes with a *** -[Not A Type retain]: message sent to deallocated instance 0x4e9dc60 error. But wait, there's more! I can fix it in either of two ways. Firstly, I can uncomment the NSLog at the top of the function. I get a leak from the NSLog's ABRecordCopyValue every time through, but the code seems to run fine. Also, I can comment out the CFRelease(multi); at the end, which does exactly the same thing. Static analysis errors, but running code. So without a leak, this function crashes. To prevent a crash, I need to haemorrhage memory. Neither is a great solution. Can anyone point out what's going on?

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0). 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a reponse (2.1 quoted the charset value, otherwise same response). Apparently I need to dictate the exact Content-type header for Exchange to be happy. Is there a way for me to do this without forcing me to manually rebuild the dependency? I currently rely on published maven artifacts, and would like to continue doing this if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the applications scope, but can not add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the jaxws RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard http/https client from Sun JRE5 or JRE6.

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes: uri: a URI to test (could be HTTP/HTTPS/SSH/local); depends: an associative array of tests/values that this test depends on; join: a list of DB joins to be added when selecting items to process in this test; depends_db: additional conditions to add to the DB request when selecting items to process in this test. The program builds a dependency tree, beginning with the tests that have no dependencies; for each test: a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db); the list of items is sent to the URI (using POST or stdin); the result is retrieved as a YAML file listing the state and comments for the test for each tested item; the results are stored in the DB; the test returns, allowing dependent tests to be performed. The program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be: - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well; - DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry. Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
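
    On the implementation side, the core of such a scheduler is fairly small in either language; a Python sketch of the dependency-ordered, parallel dispatch loop using only the standard library (actual test execution is abstracted behind a run_one callable, and the test-description format here is an assumption modelled on the attributes above):

      from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

      def run_all(tests, run_one, max_workers=4):
          """tests: {name: {"depends": [names, ...], ...}}; run_one(name) performs one test."""
          done, pending, running = set(), set(tests), {}
          with ThreadPoolExecutor(max_workers=max_workers) as pool:
              while pending or running:
                  # start every test whose dependencies have all completed
                  ready = [t for t in pending if set(tests[t]["depends"]) <= done]
                  for t in ready:
                      pending.discard(t)
                      running[pool.submit(run_one, t)] = t
                  if not running:
                      raise RuntimeError("unsatisfiable dependencies: %s" % sorted(pending))
                  finished, _ = wait(running, return_when=FIRST_COMPLETED)
                  for fut in finished:
                      fut.result()                  # re-raise failures
                      done.add(running.pop(fut))
          return done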

    Read the article
