Search Results

Search found 37174 results on 1487 pages for 'java runtime'.


  • Dereferencing the null pointer

    - by zilgo
    The standard says that dereferencing the null pointer leads to undefined behaviour. But what is "the null pointer"? In the following code, which of these is "the null pointer"?

        struct X {
            static X* get() { return reinterpret_cast<X*>(1); }
            void f() { }
        };

        int main() {
            X* x = 0;
            (*x).f();                                        // the null pointer? (1)
            x = X::get();
            (*x).f();                                        // the null pointer? (2)
            x = reinterpret_cast<X*>( X::get() - X::get() );
            (*x).f();                                        // the null pointer? (3)
            (*(X*)0).f();    // I think this is the only null pointer here (4)
        }

    My thought is that dereferencing of the null pointer takes place only in the last case. Am I right? Is there a difference between compile-time null pointers and runtime ones according to the C++ Standard?
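
    A minimal sketch of the distinction the question is circling, offered as an illustration (the variable names are mine, not the poster's): in C++03 a null pointer comes from a null pointer *constant* (an integral constant expression evaluating to 0), whereas converting a zero that is merely a run-time value goes through reinterpret_cast (C++03 section 5.2.10) and is only implementation-defined.

        #include <cstddef>

        int main() {
            int* a = 0;                           // 0 is a null pointer constant, so a is null
            volatile std::ptrdiff_t z = 0;        // a zero the compiler must treat as a run-time value
            int* b = reinterpret_cast<int*>(z);   // implementation-defined mapping; the standard
                                                  // does not guarantee b is a null pointer
            return 0;                             // (a == 0) is true by definition; (b == 0) usually is
        }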

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and hope I won't need to for purposes of this question, but we'll see. I've been trying to come up to speed on WPF, and have been building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And I'm pretty mystified at this point.

    It appears to me as though Expression Blend is designed with the expectation that you already know all of the objects that are going to be present in the UI at design time. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row. I can, pretty readily, write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all of the controls to properties in either the data or the metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend.

    I can manually create all these controls, so that I have a set of controls that I can apply styles to and work out the visual design of the app, but it's really a pain to do this. I can write code that goes through my data model and emits XAML for all these controls, I suppose, and then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think of it. Is this enough information for someone to try formulating an answer?
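
    A sketch of the runtime population the question describes, with an invented ColumnMeta type standing in for the real metadata record (everything here is illustrative, not the product's actual API):

        // Hypothetical metadata record for one column.
        public class ColumnMeta { public string Caption; public string Path; public double Width; }

        // Build caption/editor pairs into a horizontal StackPanel at runtime.
        // Requires System.Windows.Controls and System.Windows.Data.
        void PopulateRow(StackPanel panel, IEnumerable<ColumnMeta> columns, object row)
        {
            panel.Children.Clear();
            foreach (var col in columns)
            {
                panel.Children.Add(new TextBlock { Text = col.Caption });     // caption from metadata
                var box = new TextBox { Width = col.Width, DataContext = row };
                box.SetBinding(TextBox.TextProperty, new Binding(col.Path));  // Text bound to data
                panel.Children.Add(box);
            }
        }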

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing it. (I'm using the Google Test framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been
        installed, the message is printed to stderr. Under Windows, the message is sent to the
        debugger. If you are using the default message handler this function will abort on Unix
        systems to create a core dump. On Windows, for debug builds, this function will report
        a _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the
        output at runtime, install your own message handler with qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application still stops at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
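
    The behavior above is consistent with how qFatal() works in Qt 4: the abort happens inside qFatal() itself, after the installed handler returns, so a handler that merely logs cannot keep the process alive. A common test-only workaround, sketched here with the caveat that it throws a C++ exception out of a callback Qt does not document as throw-safe, is to never return from the handler on fatal messages:

        #include <stdexcept>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg)
                throw std::runtime_error(msg);   // unwind into the test instead of reaching abort()
            fprintf(stderr, "%s\n", msg);
        }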

  • Segmentation fault on certain inputs and not others

    - by Brandon Schwandt
    Here's a function I wrote that has some debugging elements in it already. When I enter either a "y" or a "Y" as the input, I get a segmentation fault at runtime. When I enter any other value, the code runs. The seg fault kicks out after it scans and gives me the response, but before the "scan worked" line is output. Don't know why it would act like this only on these values. If anyone needs the function call, I have that as well.

        query_user(char *response [10])
        {
            printf("response after query call before clear=%s\n", response);
            strcpy(response, "");
            printf("response after clearing before scan=%s\n", response);
            printf("Enter another person into the line? y or n\n");
            scanf("%s", response);
            printf("response after scan=%s\n", response);
            printf("scan worked");
        }

        main()
        {
            char response [10];
            strcpy(response, "y");
            printf("response=%s\n", response);
            printf("When finished with program type \"done\" to exit\n");
            while (strcmp(response, "done") != 0) {
                printf("response after while loop and before query call=%s\n", response);
                query_user(&response);
            }
        }

    Output on error:

        response after query call before clear=y
        response after clearing before scan=
        Enter another person into the line? y or n
        y
        response after scan=y
        Segmentation Fault (core dumped)

    Output on non-error:

        response after query call before clear=y
        response after clearing before scan=
        Enter another person into the line? y or n
        n
        response after scan=n
        scan worked
        Cycle number 0

    (The program continues to run outside this function.)
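
    A note on what the compiler actually sees here, plus a repaired sketch: char *response[10] declares an array of ten char pointers, so the parameter's type is really char **, while the caller passes &response, whose type is char (*)[10]. Using the buffer through that mismatched type is undefined behavior, which is exactly the kind of bug that can crash on some inputs and not others. Also note that "scan worked" has no trailing newline, so stdout buffering can make the crash point look earlier than it really is. A minimal corrected version (prompting logic trimmed):

        #include <stdio.h>
        #include <string.h>

        void query_user(char *response)              /* take a plain char buffer */
        {
            printf("Enter another person into the line? y or n\n");
            scanf("%9s", response);                  /* bound the read to the 10-byte buffer */
        }

        int main(void)
        {
            char response[10] = "y";
            printf("When finished with program type \"done\" to exit\n");
            while (strcmp(response, "done") != 0) {
                query_user(response);                /* the array decays to char *; no & needed */
            }
            return 0;
        }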

  • Creating a C++ DLL and then using it in C#

    - by Major
    OK, I'm trying to make a C++ DLL that I can then call and reference in a C# app. I've already made a simple DLL using the numerous guides out there; however, when I try to reference it in the C# app I get the error:

        Unable to load DLL 'SDES.dll': The specified module could not be found.

    The code for the program is as follows (bear with me, I'm going to include all the files):

        // test.h -- the DLL's header
        #ifndef TestDLL_H
        #define TestDLL_H

        extern "C" {
            // Returns a + b
            __declspec(dllexport) double Add(double a, double b);
            // Returns a - b
            __declspec(dllexport) double Subtract(double a, double b);
            // Returns a * b
            __declspec(dllexport) double Multiply(double a, double b);
            // Returns a / b
            // Throws DivideByZeroException if b is 0
            __declspec(dllexport) double Divide(double a, double b);
        }

        #endif

        // test.cpp
        #include "test.h"
        #include <stdexcept>   // the original include was lost in transcription; invalid_argument needs it
        using namespace std;

        extern double __cdecl Add(double a, double b) { return a + b; }
        extern double __cdecl Subtract(double a, double b) { return a - b; }
        extern double __cdecl Multiply(double a, double b) { return a * b; }
        extern double __cdecl Divide(double a, double b)
        {
            if (b == 0) {
                throw new invalid_argument("b cannot be zero!");
            }
            return a / b;
        }

        // C# program
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication1
        {
            class Program
            {
                [DllImport("SDES.dll")]
                public static extern void SimulateGameDLL(int a, int b);

                static void Main(string[] args)
                {
                    SimulateGameDLL(1, 2); // Error here...
                }
            }
        }

    Anyone have any ideas what I may be missing in my program? Let me know if I missed some code or if you have any questions.
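
    Two things stand out, offered as a hedged checklist rather than a definitive diagnosis: the C# code imports SimulateGameDLL, which none of the C++ shown actually exports, and "module could not be found" usually means SDES.dll (or a C++ runtime DLL it depends on) is not next to the C# executable. A sketch of a matching pair for a function that is exported:

        // C# side: name, calling convention, and signature must match the C++ export.
        using System;
        using System.Runtime.InteropServices;

        class Program
        {
            // extern "C" functions default to cdecl, so say so explicitly.
            [DllImport("SDES.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern double Add(double a, double b);

            static void Main()
            {
                // SDES.dll must sit in the output directory (or on the DLL search path).
                Console.WriteLine(Add(1.0, 2.0)); // 3
            }
        }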

  • Code Contracts: Do we have to specify Contract.Requires(...) statements redundantly in delegating methods?

    - by herzmeister der welten
    I'm intending to use the new .NET 4 Code Contracts feature for future development. This made me wonder whether we have to specify equivalent Contract.Requires(...) statements redundantly in a chain of methods. I think a code example is worth a thousand words:

        public bool CrushGodzilla(string weapon, int velocity)
        {
            Contract.Requires(weapon != null);
            // long code
            return false;
        }

        public bool CrushGodzilla(string weapon)
        {
            Contract.Requires(weapon != null);   // specify the contract requirement here as well???
            return this.CrushGodzilla(weapon, int.MaxValue);
        }

    For runtime checking it doesn't matter much, as we will eventually always hit the requirement check, and we will get an error if it fails. However, is it considered bad practice not to specify the contract requirement again in the second overload?

    Also, there will be the feature of compile-time checking, and possibly also design-time checking of code contracts. It seems it's not yet available for C# in Visual Studio 2010, but I think there are some languages, like Spec#, that already do this. These engines will probably give us hints when we write code to call such a method and our argument currently can or will be null. So I wonder whether these engines will always analyze a call stack until they find a method with a contract that is currently not satisfied?

    Furthermore, here I learned about the difference between Contract.Requires(...) and Contract.Assume(...). I suppose that difference is also worth considering in the context of this question then?

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At the moment I am programming a DLL (class library) in C#. This DLL uses a third-party library and therefore deals with third-party objects for which I do not have the source code. Now I am planning to create another DLL, which is going to be used at a later stage in my application. This second DLL should use the third-party objects (with corresponding object states) created by the first DLL.

    Luckily the third-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.RemotingServices.Marshal(...). I then serialize the objects using a BinaryFormatter and store them as a byte[] array. All goes well. I can deserialize and unmarshal in the opposite way and end up with my original third-party objects... so it appears...

    Nevertheless, when calling methods on my deserialized third-party objects I get object-internal exceptions. Normally these methods return other third-party objects, but (obviously, I guess) now these objects are missing because they weren't serialized. Now my global question: how would I go about marshalling/serializing all the objects which my third-party objects reference, cascading down the "reference tree" to obtain a full and complete serialized object? Right now my guess is to preprocess: obtain all the objects, build my own custom object, and serialize it. But I'm hoping there is some other way...
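
    One detail worth flagging, hedged since the third-party types are unknown: marshalling a MarshalByRefObject produces an ObjRef, a pointer back to the live object in the original process, not a copy of its state, so the deserialized result is a proxy whose calls fail once the original object is gone. By contrast, for types that are themselves [Serializable], BinaryFormatter already walks the entire reference graph, which is the "cascade" being asked for; a minimal sketch with invented types:

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        [Serializable]
        class Node { public string Name; public Node Next; }

        class Demo
        {
            static void Main()
            {
                var a = new Node { Name = "a", Next = new Node { Name = "b" } };

                var fmt = new BinaryFormatter();
                using (var ms = new MemoryStream())
                {
                    fmt.Serialize(ms, a);                  // serializes a AND the b it references
                    ms.Position = 0;
                    var copy = (Node)fmt.Deserialize(ms);
                    Console.WriteLine(copy.Next.Name);     // "b" -- the graph came back whole
                }
            }
        }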

  • Linq-to-XML explicit casting in a generic method

    - by vlad
    I've looked for a similar question, but the only one that was close didn't help me in the end. I have an XML file that looks like this:

        <Fields>
            <Field name="abc" value="2011-01-01" />
            <Field name="xyz" value="" />
            <Field name="tuv" value="123.456" />
        </Fields>

    I'm trying to use LINQ to XML to get the values from these fields. The values can be of type Decimal, DateTime, String and Int32. I was able to get the fields one by one using a relatively simple query. For example, I'm getting the value from the field named "abc" using the following:

        private DateTime GetValueFromAttribute(IEnumerable<XElement> fields, String attName)
        {
            return (from field in fields
                    where field.Attribute("name").Value == "abc"
                    select (DateTime)field.Attribute("value")).FirstOrDefault();
        }

    This is placed in a separate function that simply returns the value, and everything works fine (since I know that there is only one element whose name attribute is set to "abc"). However, since I have to do this for decimals and integers and dates as well, I was wondering if I can make a generic function that works in all cases. This is where I got stuck. Here's what I have so far:

        private T GetValueFromAttribute<T>(IEnumerable<XElement> fields, String attName)
        {
            return (from field in fields
                    where field.Attribute("name").Value == attName
                    select (T)field.Attribute("value").Value).FirstOrDefault();
        }

    This doesn't compile because it doesn't know how to convert from String to T. I tried boxing and unboxing, i.e.

        select (T)(Object)field.Attribute("value").Value

    but that throws a runtime "Specified cast is not valid" exception, as it's trying to convert the String to a DateTime, for instance. Is this possible in a generic function? Can I put a constraint on the generic function to make it work? Or do I have to have separate functions to take advantage of LINQ to XML's explicit cast operators?
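
    A constraint won't unlock the cast, but a runtime conversion will; a sketch using Convert.ChangeType, which handles the String/Int32/Decimal/DateTime set mentioned above (the empty-string value would still need special-casing before conversion):

        using System.Globalization;

        private T GetValueFromAttribute<T>(IEnumerable<XElement> fields, String attName)
        {
            var raw = (from field in fields
                       where field.Attribute("name").Value == attName
                       select field.Attribute("value").Value).FirstOrDefault();
            return raw == null
                ? default(T)
                : (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
        }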

  • Type of member is not CLS-compliant

    - by John Galt
    Using Visual Studio 2008 and VB.NET: I have a working web app that uses an ASMX web service which is compiled into its own assembly. I have another class library project, compiled as a separate assembly, that serves as a proxy to this web service. This all seems to work at runtime, but I am getting a warning at compile time which I don't understand and would like to fix:

        Type of member 'wsZipeee' is not CLS-compliant

    I have dozens of web forms in the main project that reference the proxy class with no compile-time complaints, as this snippet shows:

        Imports System.Data

        Partial Class frmZipeee
            Inherits System.Web.UI.Page

            Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee
            Dim dsStandardMsg As DataSet

            Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) _
                Handles MyBase.Load

    And yet I have one web form (also in the root of the main project) which gives me the "not CLS-compliant" message, even though it references the proxy class just like the other ASPX files. I get the compile-time warning on the line annotated by me with ' ERROR here:

        Imports System.Data

        Partial Class frmHome
            Inherits System.Web.UI.Page

            Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee   ' ERROR here
            Dim dsStandardMsg As DataSet

            Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) _
                Handles MyBase.Load

    This makes no sense to me. The file with the warning is called frmHome.aspx.vb; all the others in the project declare things the same way and have no warning. BTW, the web service itself returns standard data types: Integer, String, and DataSet.

  • Preallocating memory with C++ in realtime environment

    - by Elazar Leibovich
    I have a function which gets an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say that I'm using a vector which uses static preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        template <typename T>
        void processData(vector<T> &vec)
        {
            vector<T> aux(vec.size());   // dynamically allocates memory
            // process data
        }
        // usage: processData(v);

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance. The buffer is allocated only once, at startup. I want that whenever I'm allocating a vector, I'll automatically allocate an auxiliary buffer for my processData function. I can do something similar with a template function:

        template <typename T>
        static void _processData(vector<T> &vec, vector<T> &aux)
        {
            // process data
        }

        template <size_t sz, typename T>
        void processData(vector<T> &vec)
        {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer);   // use aux_buffer as the vector's storage
            _processData(vec, aux);
        }
        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything because I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs around this problem?
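
    One template-light alternative, sketched under the assumption that a single worst-case size is known at startup (the Scratch type is invented for illustration): allocate one arena once, and have processData borrow from it instead of owning a static buffer per instantiation.

        #include <cstddef>

        // Allocated once at startup; handing out memory never allocates again.
        class Scratch {
            char        *buf_;
            std::size_t  cap_;
        public:
            explicit Scratch(std::size_t worstCase) : buf_(new char[worstCase]), cap_(worstCase) {}
            ~Scratch() { delete[] buf_; }
            // memory from new char[] is suitably aligned for any fundamental type
            void *get(std::size_t bytes) { return bytes <= cap_ ? buf_ : 0; }
        };

        template <typename T>
        void processData(vector<T> &vec, Scratch &scratch)
        {
            T *aux = static_cast<T*>(scratch.get(vec.size() * sizeof(T)));
            // process data, using aux as the auxiliary buffer; no allocation on this path
        }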

  • Inversion of control domain objects construction problem

    - by Andrey
    Hello! As I understand it, an IoC container is helpful in the creation of application-level objects like services and factories, but domain-level objects should be created manually. Spring's manual tells us: "Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects."

    Fine. But what if my domain "fine-grained" object depends on some application-level object? For example, I have a UserViewer(User user, UserConstants constants) class. There user is a domain object which cannot be injected, but UserViewer also needs UserConstants, which is a high-level object injected by the IoC container. I want to inject UserConstants from the IoC container, but I also need a transient runtime parameter User here. What is wrong with the design? Thanks in advance!

    UPDATE: It seems I was not precise enough with my question. What I really need is an example of how to do this: create an instance of the class UserViewer(User user, UserService service), where user is passed as a parameter and service is injected from IoC. If I inject UserViewer viewer, then how do I pass user to it? And if I create UserViewer viewer manually, then how do I pass service to it?
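
    The usual shape of the answer is a container-managed factory: the factory gets the service injected once, and the transient User arrives as an ordinary method argument. A sketch (the factory class is invented; UserViewer and UserService follow the question):

        // Configured as a Spring bean, so UserService is injected by the container.
        public class UserViewerFactory {
            private final UserService service;

            public UserViewerFactory(UserService service) {
                this.service = service;
            }

            // The domain object is a plain runtime parameter.
            public UserViewer create(User user) {
                return new UserViewer(user, service);
            }
        }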

  • Android Signal 11 (SIGSEGV)

    - by Naturjoghurt
    I read many posts here and on other sites, but cannot find what is causing my error. I use an AsyncTask because I want to easily manipulate the UI thread before and after execution. In doInBackground I create a ThreadPoolExecutor and execute Runnables. If I execute only one Runnable with the executor, there is no problem, but if I execute another Runnable I get the following error:

        06-26 18:00:42.288: A/libc(25073): Fatal signal 11 (SIGSEGV) at 0x7f486162 (code=1), thread 25106 (pool-1-thread-2)
        06-26 18:00:42.304: D/dalvikvm(25073): GC_CONCURRENT freed 119K, 2% free 8908K/9056K, paused 4ms+4ms, total 45ms
        06-26 18:00:42.327: I/System.out(25073): In Check All with Prefix: a and Length: 4
        06-26 18:00:42.390: I/DEBUG(126): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        06-26 18:00:42.390: I/DEBUG(126): Build fingerprint: 'google/yakju/maguro:4.2.2/JDQ39/573038:user/release-keys'
        06-26 18:00:42.390: I/DEBUG(126): Revision: '9'
        06-26 18:00:42.390: I/DEBUG(126): pid: 25073, tid: 25106, name: pool-1-thread-2 >>> de.uni_duesseldorf.cn.distributed_computing2 <<<
        06-26 18:00:42.390: I/DEBUG(126): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 7f486162
        ...
        06-26 18:00:42.538: I/DEBUG(126): memory map around fault addr 7f486162:
        06-26 18:00:42.538: I/DEBUG(126): 60292000-60391000
        06-26 18:00:42.538: I/DEBUG(126): (no map for address)
        06-26 18:00:42.538: I/DEBUG(126): bed14000-bed35000 [stack]

    I set up the ThreadPoolExecutor like this:

        // numberOfPackages: number of Runnables to be executed
        public void initializeThreadPoolExecutor(int numberOfPackages) {
            int corePoolSize = Runtime.getRuntime().availableProcessors();
            int maxPoolSize = numberOfPackages;
            long keepAliveTime = 60;
            final BlockingQueue<Runnable> workingQueue = new LinkedBlockingQueue<Runnable>();
            executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize, keepAliveTime,
                    TimeUnit.SECONDS, workingQueue);
        }

    I have no clue why it fails when starting the second thread. Maybe memory leaks? Any help appreciated. Thanks in advance.
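
    A native SIGSEGV won't come from the executor itself, but there is one Java-level trap in this setup worth ruling out, noted here as an assumption about untested inputs: ThreadPoolExecutor throws IllegalArgumentException when maximumPoolSize is smaller than corePoolSize, and with an unbounded LinkedBlockingQueue the pool never grows beyond corePoolSize anyway. A defensive sketch:

        int corePoolSize = Runtime.getRuntime().availableProcessors();
        int maxPoolSize = Math.max(corePoolSize, numberOfPackages);   // never below the core size
        executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());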

  • VB.net avoiding cross thread exception with extension method

    - by user574632
    Hello, I am trying to implement a solution for updating form controls without using a delegate. I am attempting to use the first solution on this page: http://www.dreamincode.net/forums/blog/143/entry-2337-handling-the-dreaded-cross-thread-exception/

        Imports System.ComponentModel
        Imports System.Runtime.CompilerServices

        Public Module MyInvoke
            <Extension()> _
            Public Sub CustomInvoke(Of T As ISynchronizeInvoke)(ByVal control As T, ByVal toPerform As Action(Of T))
                If control.InvokeRequired Then
                    control.Invoke(toPerform, New Object() {control})
                Else
                    toPerform(control)
                End If
            End Sub
        End Module

    The site gives this as an example of how to use it:

        Label1.CustomInvoke(l => l.Text = "Hello World!")

    But I get an "'l' is not declared" error. As you can see, I'm very new to VB and OOP in general. I can get the second solution on that page to work (using delegates), but I have quite a few things to do in this thread, and it seems like I would need to write a new delegate sub for each one, which seems wasteful. What I need to do is select the first item from a combobox, update a textbox's Text with the selected item, and pass the selected item to a function, then wait for x seconds and start again, selecting the second item. I can get it to work in a single-threaded application, but I need the interface to remain responsive. Any help greatly appreciated.

    EDIT: OK, so changing the syntax worked for the example. However, if I change it from

        Label1.CustomInvoke(Sub(l) l.Text = "hello world!")

    (which worked just fine) to:

        Dim indexnumber As Integer = 0
        ComboBox1.CustomInvoke(Sub(l) l.SelectedIndex = indexnumber)

    I get a cross-threading error as though I didn't even use this method:

        Cross-thread operation not valid: Control 'ComboBox1' accessed from a thread other than the thread it was created on.

    So now I'm back where I started? Any further help very much appreciated.

  • When is a try catch not a try catch?

    - by Dearmash
    I have a fun issue where, during application shutdown, try/catch blocks are seemingly being ignored in the stack. I don't have a working test project yet (due to a deadline; otherwise I'd totally try to repro this), but consider the following code snippet:

        public static string RunAndPossiblyThrow(int index, bool doThrow)
        {
            try
            {
                return Run(index);
            }
            catch (ApplicationException e)
            {
                if (doThrow)
                    throw;
            }
            return "";
        }

        public static string Run(int index)
        {
            if (_store.Contains(index))
                return _store[index];
            throw new ApplicationException("index not found");
        }

        public static string RunAndIgnoreThrow(int index)
        {
            try
            {
                return Run(index);
            }
            catch (ApplicationException e)
            {
            }
            return "";
        }

    During runtime this pattern works famously. We get legacy support for code that relies on exceptions for program control (bad), and we get to move forward and slowly remove exceptions used for program control. However, when shutting down our UI, we see an exception thrown from Run even though doThrow is false for ALL current uses of RunAndPossiblyThrow. I've even gone so far as to verify this by modifying code to look like RunAndIgnoreThrow, and I'll still get a crash after UI shutdown.

    Mr. Eric Lippert, I read your blog daily; I'd sure love to hear it's some known bug and I'm not going crazy.

    EDIT: This is multi-threaded, and I've verified all objects are not modified while being accessed.

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.) So I use g++ and ar to create a static library (.a). This file contains all the symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes unnecessarily long, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process.

    When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-library symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking but use a static library. If my Google searches are correct in thinking this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.
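
    Something close to this is possible with GNU binutils, sketched here with illustrative file names: partially link the 300 objects into one relocatable object (resolving the intra-library references once), demote everything outside the public API to local symbols, and archive the result.

        # one relocatable object; intra-library references are resolved here, once
        ld -r -o libfoo_combined.o *.o

        # exported_symbols.txt lists the public API, one symbol per line;
        # every other global symbol becomes local
        objcopy --keep-global-symbols=exported_symbols.txt libfoo_combined.o

        # wrap it in an ordinary static archive for consumers
        ar rcs libfoo.a libfoo_combined.o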

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code.

    What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively; I'll probably have to store the 24-bit PCM in an int32_t on 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch (mFormat) {
        case 1: // unsigned 8 bit
            for (int i = 0; i < mChannels; i++)
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for (int i = 0; i < mChannels; i++)
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++ with no templates, no RTTI, and no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
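
    In the spirit of the macro request, a hedged X-macro sketch: the format table is written once, and each accessor expands its own switch from it. The format ids and the read-as-double convention are illustrative, and 24-bit packed would still need a hand-written case.

        #include <stdint.h>

        #define PCM_FORMATS        \
            PCM_F(1, uint8_t)      \
            PCM_F(2, int8_t)       \
            PCM_F(3, uint16_t)     \
            PCM_F(4, int16_t)      \
            PCM_F(5, int32_t)      \
            PCM_F(6, float)        \
            PCM_F(7, double)

        /* read one sample (channel c, frame f) as a double, whatever the storage type */
        static double pcm_read(void **pcm, int fmt, int c, long f)
        {
            switch (fmt) {
        #define PCM_F(id, type) case id: return (double)((type *)pcm[c])[f];
            PCM_FORMATS
        #undef PCM_F
            }
            return 0.0;
        }

        /* writing expands from the same table, so the switch is never duplicated by hand */
        static void pcm_write(void **pcm, int fmt, int c, long f, double v)
        {
            switch (fmt) {
        #define PCM_F(id, type) case id: ((type *)pcm[c])[f] = (type)v; break;
            PCM_FORMATS
        #undef PCM_F
            }
        }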

  • Iteration over a LINQ to SQL query is very slow

    - by devzero
    I have a view, AdvertView, in my database. This view is a simple join between some tables (advert, customer, properties). I have a simple LINQ query to fetch all adverts for a customer:

        public IEnumerable<AdvertView> GetAdvertForCustomerID(int customerID)
        {
            var advertList = from advert in _dbContext.AdvertViews
                             where advert.Customer_ID.Equals(customerID)
                             select advert;
            return advertList;
        }

    I then wish to map this to model items for my MVC application:

        public List<AdvertModelItem> GetAdvertsByCustomer(int customerId)
        {
            List<AdvertModelItem> lstAdverts = new List<AdvertModelItem>();
            List<AdvertView> adViews = _dataHandler.GetAdvertForCustomerID(customerId).ToList();
            foreach (AdvertView adView in adViews)
            {
                lstAdverts.Add(_advertMapper.MapToModelClass(adView));
            }
            return lstAdverts;
        }

    I was expecting to have some performance issues with the SQL, but the problem seems to be with the .ToList() call. I'm using the ANTS performance profiler, and it reports that the total runtime of the function is 1,400 ms, with 850 ms of that in ToList(). So my question is: why does the ToList() call take such a long time here?
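
    One reading of that profile, offered as the standard deferred-execution caveat rather than a diagnosis: GetAdvertForCustomerID only builds an expression; no SQL runs until something enumerates it, so ToList() is the line where the query executes and every row is materialized into AdvertView objects. A sketch that makes the split visible:

        var sw = System.Diagnostics.Stopwatch.StartNew();

        var query = _dataHandler.GetAdvertForCustomerID(customerId);
        Console.WriteLine(sw.ElapsedMilliseconds);   // ~0 ms: only an expression exists so far

        var adViews = query.ToList();                // the SQL executes and rows materialize HERE
        Console.WriteLine(sw.ElapsedMilliseconds);   // this is the line the 850 ms lands on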

  • Boost::Thread linking error on OSX?

    - by gct
    So I'm going nuts trying to figure this one out. Here's my basic setup: I'm compiling a shared library with a bunch of core functionality that uses a lot of Boost stuff. We'll call this library libpf_core.so. It's linked with the Boost static libraries, specifically the python, system, filesystem, thread, and program_options libraries. This all goes swimmingly.

    Now, I have a little test program called test_socketio which is compiled into a shared library (it's loaded as a plugin at runtime). It uses some Boost stuff like boost::bind and boost::thread, and it's linked against libpf_core.so (which has the Boost libraries included, remember). When I go to compile test_socketio, though, out of all my plugins it alone gives me a linking error:

        [ Building test_socketio ]
        g++ -c -pg -g -O0 -I/usr/local/include -I../include test_socketio.cc -o test_socketio.o
        g++ -shared test_socketio.o -lpy_core -o test_socketio.so
        Undefined symbols:
          "boost::lock_error::lock_error()", referenced from:
              boost::unique_lock<boost::mutex>::lock() in test_socketio.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    And I'm going crazy trying to figure out why. I've tried explicitly linking boost::thread into the plugin to no avail, and I've tried ensuring that I'm using the Boost headers associated with the libraries linked into libpf_core.so, in case there was a conflict there. Is there something OSX-specific regarding Boost that I'm missing? In my searching on Google I've seen a number of other people get this error, but no one seems to have come up with a satisfactory solution.
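
    For what it's worth, the failing symbol suggests the plugin is instantiating boost::thread code itself, so it needs to link the Boost thread library directly rather than picking the symbols up through libpf_core.so (symbols statically linked into another .so are not necessarily re-exported). A sketch of the link line, hedged since the installed library may be named boost_thread or boost_thread-mt depending on the build:

        g++ -shared test_socketio.o -lpy_core -lboost_thread-mt -o test_socketio.so

    Confirming that the headers under -I/usr/local/include match the Boost version that built libpf_core.so is worth doing too; a header/library version mismatch can manufacture missing symbols like this.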

  • Using virtualization infrastructure for J2EE application distribution - viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:

        - latest app server patch not applied
        - conflicting third-party libraries inside the app server root
        - server runtime and tuning parameters not configured (for example, the number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as for virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure: a distributed VM should be less affected by the hosting environment.

    So far, the downside I see is application server licenses: clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is complexity in maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • JQuery - Set TBODY

    - by Villager
    Hello, I have a table defined as follows:

        <table id="myTable" cellpadding="0" cellspacing="0">
            <thead><tr>
                <th>Date</th>
                <th>First Name</th>
                <th>Last Name</th>
            </tr></thead>
            <tbody>
                <!-- rows will go here -->
            </tbody>
        </table>

    I am trying to dynamically populate myTable at runtime via JavaScript. To accommodate this, I am using jQuery. I want to write some HTML into the tbody element within myTable. However, I am having problems understanding how to do this with the selectors. I know that I can get myTable using:

        $("#myTable")

    I know that I can set the HTML of myTable by using the following:

        $("#myTable").html(someHtmlString);

    However, that sets the HTML of the entire table. In reality, I just want to set the HTML within the tbody of myTable. How do I do this with jQuery? Thank you!
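
    A minimal sketch of the descendant selector this calls for (the row content is invented):

        // replace everything inside the tbody, leaving the thead alone
        $("#myTable tbody").html("<tr><td>2010-05-01</td><td>Jane</td><td>Doe</td></tr>");

        // or add rows without discarding the existing ones
        $("#myTable tbody").append("<tr><td>2010-05-02</td><td>John</td><td>Smith</td></tr>");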

  • Problem with inheritance and List<>

    - by Jagd
    I have an abstract class called Grouping, and a subclass called GroupingNNA:

        public class GroupingNNA : Grouping
        {
            // blah blah blah
        }

    I have a List that contains items of type GroupingNNA, but is actually declared to contain items of type Grouping:

        List<Grouping> lstGroupings = new List<Grouping>();
        lstGroupings.Add(new GroupingNNA { fName = "Joe" });
        lstGroupings.Add(new GroupingNNA { fName = "Jane" });

    The problem: the following LINQ query blows up on me because lstGroupings is declared as List<Grouping> and fName is a property of GroupingNNA, not Grouping:

        var results = from r in lstGroupings
                      where r.fName == "Jane"
                      select r;

    Oh, and this is a compiler error, not a runtime error. Thanks in advance for any help on this one!

    More info: here is the actual method that won't compile. The OfType<GroupingNNA>() fixed the LINQ query, but the compiler doesn't like that I'm trying to return the filtered result as a List<Grouping>:

        private List<Grouping> ApplyFilterSens(List<Grouping> lstGroupings, string fSens)
        {
            // This works now! Thanks @Lasse
            var filtered = from r in lstGroupings.OfType<GroupingNNA>()
                           where r.QASensitivity == fSens
                           select r;

            if (filtered != null)
            {
                // Compiler doesn't like this now
                return filtered.ToList<Grouping>();
            }
            else
                return new List<Grouping>();
        }
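
    For the remaining error, a sketch of the usual bridge: filtered is an IEnumerable<GroupingNNA>, and ToList<Grouping>() wants an IEnumerable<Grouping>, so the elements need an explicit upcast (in C# 3 there is no covariance to do it implicitly):

        // upcast each GroupingNNA to Grouping, then materialize
        return filtered.Cast<Grouping>().ToList();

    (The null check can go, too: a LINQ query like this is never null, only possibly empty.)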

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be. But either my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript. If I am doing something stupid, let me know. For a game I am writing, I wanted to have a blocky gradient. I have the following:

    HTML:

        <canvas id="heir"></canvas>

    CSS:

        @media screen {
            body { font-size: 12pt }
            /* Game Rendering Space */
            canvas {
                width: 640px;
                height: 480px;
                border-style: solid;
                border-width: 1px;
            }
        }

    JavaScript (shortened):

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save(); // Save All Settings (Before this Function was called)
            for (var i = 0; i < 480; i = i + 10) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, i, 640, 10);
                myblue = myblue - 2;
            }
            thecontext.restore(); // Restore Settings to Save Point (Removing Styles, etc...)
        }

        function main() {
            var targetcontext = document.getElementById("main").getContext("2d");
            testDraw(targetcontext);
        }

    To me this should produce a series of bars 640 pixels wide by 10 pixels high. In Google Chrome and Firefox I get 15 bars. To me that means (480 / 15) the bars are 32 pixels high. So I change the code to:

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save();
            for (var i = 0; i < 16; i++) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, (i * 10), 640, 10);
                myblue = myblue - 10;
            }
            thecontext.restore();
        }

    and get a true 32-pixel-height result for comparison. Other than the fact that the first snippet renders shades of blue in non-visible portions of the canvas, they both measure 32 pixels. Now back to the original code... If I inspect the <canvas> tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT Firefox exports the original code to PNG at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...
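
    The 300 x 150 PNG gives it away: that is the default size of a canvas's drawing surface, and CSS width/height only stretch the element; they never resize the surface. A sketch of the fix, sizing the bitmap itself (using the id from the HTML above):

        <!-- size the drawing surface, not just the CSS box -->
        <canvas id="heir" width="640" height="480"></canvas>

    or, equivalently, in script before drawing:

        var canvas = document.getElementById("heir");
        canvas.width = 640;    // backing-store pixels, not CSS pixels
        canvas.height = 480;
        var ctx = canvas.getContext("2d");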

  • connection between two android phones

    - by user1770346
    I am not able to connect my Android device to another device (either Android or non-Android) via Bluetooth. After detecting devices from my Android phone, I am not able to connect to the device selected from the list. The main problem is that no connection-confirmation message appears on the selected device. How can I recover from this problem? Please help me. Thanks.

    My device-search code (BluetoothSearchActivity.java):

        public class BluetoothSearchActivity extends Activity {
            ArrayAdapter<String> btArrayAdapter;
            BluetoothAdapter mBluetoothAdapter;
            TextView stateBluetooth;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                ImageView BluetoothSearchImageView = new ImageView(this);
                BluetoothSearchImageView.setImageResource(R.drawable.inner1);
                setContentView(BluetoothSearchImageView);
                setContentView(R.layout.activity_bluetooth_search);

                mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
                ListView listDevicesFound = (ListView) findViewById(R.id.myList);
                btArrayAdapter = new ArrayAdapter<String>(BluetoothSearchActivity.this,
                        android.R.layout.simple_list_item_1);
                listDevicesFound.setAdapter(btArrayAdapter);

                registerReceiver(ActionFoundReceiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
                btArrayAdapter.clear();
                mBluetoothAdapter.startDiscovery();

                listDevicesFound.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        Intent i6 = new Intent(getApplicationContext(), AcceptThread.class);
                        startActivity(i6);
                    }
                });
            }

            private final BroadcastReceiver ActionFoundReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    String action = intent.getAction();
                    if (BluetoothDevice.ACTION_FOUND.equals(action)) {
                        BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                        btArrayAdapter.add(device.getName() + "\n" + device.getAddress());
                        btArrayAdapter.notifyDataSetChanged();
                        Log.d("BluetoothSearchActivity", device.getName() + "\n" + device.getAddress());
                    }
                }
            };

            @Override
            protected void onDestroy() {
                super.onDestroy();
                unregisterReceiver(ActionFoundReceiver);
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_bluetooth_search, menu);
                return true;
            }
        }

    and my connection code (AcceptThread.java):

        class ConnectThread extends Thread {
            private static final UUID MY_UUID = UUID.fromString("fa87c0d0-afac-11de-8a39-0800200c9a66");
            private static final String ConnectThread = null;
            BluetoothSocket mSocket;
            BluetoothDevice mDevice;
            BluetoothAdapter mBluetoothAdapter;

            public ConnectThread(BluetoothDevice device) {
                BluetoothSocket temp = null;
                mDevice = device;
                try {
                    temp = mDevice.createRfcommSocketToServiceRecord(MY_UUID);
                } catch (IOException e) {
                }
                mSocket = temp;
            }

            public void run() {
                mBluetoothAdapter.cancelDiscovery();
                try {
                    Log.i(ConnectThread, "starting to connect");
                    mSocket.connect();
                } catch (IOException connectException) {
                    Log.e(ConnectThread, "connection Failed");
                    try {
                        mSocket.close();
                    } catch (IOException closeException) {
                    }
                    return;
                }
            }

            public void cancel() {
                try {
                    mSocket.close();
                } catch (IOException e) {
                }
            }
        }

        public class AcceptThread extends Thread {
            private static final String NAME = "BluetoothAcceptThread";
            private static final UUID MY_UUID = UUID.fromString("fa87c0d0-afac-11de-8a39-0800200c9a66");
            BluetoothServerSocket mServerSocket;
            BluetoothAdapter mBluetoothAdapter;

            public AcceptThread() {
                BluetoothServerSocket temp = null;
                try {
                    temp = mBluetoothAdapter.listenUsingRfcommWithServiceRecord(NAME, MY_UUID);
                } catch (IOException e) {
                }
                mServerSocket = temp;
            }

            public void run() {
                BluetoothSocket socket = null;
                while (true) {
                    try {
                        socket = mServerSocket.accept();
                    } catch (IOException e) {
                        break;
                    }
                    if (socket != null) {
                        try {
                            mServerSocket.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                        break;
                    }
                }
            }

            public void cancel() {
                try {
                    mServerSocket.close();
                } catch (IOException e) {
                }
            }
        }
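
    A few concrete defects are visible in the transcription, noted as observations rather than a full fix: AcceptThread extends Thread, yet the click handler launches it with startActivity(), which only works for Activity subclasses; and neither thread ever initializes mBluetoothAdapter, so cancelDiscovery() and listenUsingRfcommWithServiceRecord(...) are called on a null reference. A sketch of the corrected wiring:

        // inside onItemClick: run the server thread instead of "starting" it as an Activity
        new AcceptThread().start();

        // inside both thread constructors: obtain the shared adapter before using it
        mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();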

  • Adapting the width of a Flex 3 DataGridColumn to the content of its ItemRenderer?

    - by phtrivier
    I have the following scenario:

        - a Flex 3 DataGrid is sitting there
        - at runtime, a column is added to this grid
        - the column has a custom ItemRenderer
        - the ItemRenderer inherits from HBox and adds a few items to the HBox dynamically

    My problem is that the width of the column doesn't change. As a consequence, my column stays small, and an ugly horizontal scrollbar is displayed in the line instead of my content (which is completely unreadable). I would like the column to adapt its width to the content of the HBox in the ItemRenderer. I tried the following:

        - setting the 'percentWidth' of the ItemRenderer to '100'
        - invalidating the properties of the ItemRenderer after adding the items

    The only thing that has a visible effect is to force the width of the DataGridColumn. Obviously this is not acceptable, since I'm dynamically adding components to the ItemRenderer and I don't know how many there are or how big they are. Besides, inside the ItemRenderer I have no access to the column itself (or do I?), so I cannot force the size of the column from there. So is there a way around this? Would AdvancedDataGrid help here (notwithstanding the fact that I cannot really use it for other reasons...)? Thanks, PH

  • Convert enumeration to string

    - by emptyheaded
    I am trying to build a function that converts an item from an enum to its corresponding string. The enums I use are fairly long, so I didn't want to use a switch-case. I found a method using boost::unordered_map very convenient, but I don't know how to return a default value (when no item matches the enum):

        const boost::unordered_map<enum_type, const std::string> enumToString =
            boost::assign::map_list_of
                (data_1, "data_1")
                (data_2, "data_2");

    I tried to create an additional function:

        std::string convert(enum_type entry)
        {
            if (enumToString.find(entry))       // not sure what test to place here,
                return enumToString.at(entry);  // because find returns an iterator
            else
                return "invalid_value";
        }

    I even tried something exceedingly wrong:

        std::string convert(enum_type entry)
        {
            try {
                return enumToString.at(entry);
            } catch (...) {
                return "invalid_value";
            }
        }

    Result: an evil "Debug" runtime error. Can somebody give me a suggestion on how to either (1) find an easier method to convert an enum to a string with the same name as the enum item, (2) find a way to use already-built Boost methods to get a default value from a hash map (best option), or (3) find what to place in the test so that the function returns either the value for the key, or a different string if the key is not found in the map? Thank you very much.
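
    For option (3), the iterator itself is the test: compare the result of find() against end(), and dereference only on a hit. A sketch against the map declared above:

        std::string convert(enum_type entry)
        {
            boost::unordered_map<enum_type, const std::string>::const_iterator it =
                enumToString.find(entry);
            return it != enumToString.end() ? it->second : "invalid_value";
        }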
