Search Results

Search found 20283 results on 812 pages for 'security context'.


  • HTTP client - HTTP 405 error "Method not allowed". I send an HTTP POST but for some reason an HTTP GET is sent

    - by Shino88
    Hey I am using apache library. I have created a class which sends a post request to a servlet. I have set up the parameters for the client and i have created a HTTP post object to be sent but for some reason when i excute the request i get a reposnse that says the get method is not supported(which is true cause i have only made a dopost method in my servlet). It seems that a get request is being sent but i dont know why. The post method worked before but i started gettng http error 417 "Expectation Failed" which i fixed by adding paramenters. below is my class with the post method. P.s i am developing for android. public class HTTPrequestHelper { private final ResponseHandler<String> responseHandler; private static final String CLASSTAG = HTTPrequestHelper.class.getSimpleName(); private static final DefaultHttpClient client; static{ HttpParams params = new BasicHttpParams(); params.setParameter(CoreProtocolPNames.PROTOCOL_VERSION, HttpVersion.HTTP_1_1); params.setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET, HTTP.UTF_8); ///params.setParameter(CoreProtocolPNames.USER_AGENT, "Android-x"); params.setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, 15000); params.setParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK, false); SchemeRegistry schemeRegistry = new SchemeRegistry(); schemeRegistry.register( new Scheme("http", PlainSocketFactory.getSocketFactory(), 80)); schemeRegistry.register( new Scheme("https", SSLSocketFactory.getSocketFactory(), 443)); ThreadSafeClientConnManager cm = new ThreadSafeClientConnManager(params, schemeRegistry); client = new DefaultHttpClient(cm,params); } public HTTPrequestHelper(ResponseHandler<String> responseHandler) { this.responseHandler = responseHandler; } public void performrequest(String url, String para) { HttpPost post = new HttpPost(url); StringEntity parameters; try { parameters = new StringEntity(para); post.setEntity(parameters); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } BasicHttpResponse errorResponse = new BasicHttpResponse( new ProtocolVersion("HTTP_ERROR", 1, 1), 500, "ERROR"); try { client.execute(post, this.responseHandler); } catch (Exception e) { errorResponse.setReasonPhrase(e.getMessage()); try { this.responseHandler.handleResponse(errorResponse); } catch (Exception ex) { Log.e( "ouch", "!!! IOException " + ex.getMessage() ); } } } I tried added the allow header to the request but that did not work as well but im not sure if i was doing right. below is the code. client.addRequestInterceptor(new HttpRequestInterceptor() { @Override public void process(HttpRequest request, HttpContext context) throws HttpException, IOException { //request.addHeader("Allow", "POST"); } });

    Read the article

  • To connect Gstreamer with Qt in order to play a gstreamer video in the Qt Widget

    - by raggio
    I tried using phonon to play the video but could not succeed. Off-late came to know through the Qt forums that even the latest version of Qt does not support phonon. Thats when i started using Gstreamer.Any suggestions as to how to connect the Gstreamer window with the Qt widget?My aim is to play a video using Gstreamer on the Qt widget.So how do i link the gstreamer window and the Qt widget? I am successful in getting the Id of the widget through winid(). Further with the help of Gregory Pakosz, I have added the below 2 lines of code in my application - QApplication::syncX(); gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId()); However am not able to link the Qt widget with the gstreamer video window. This is what my sample code would look like :- int main(int argc, char *argv[]) { printf("winid=%d\n", w.winId()); gst_init (NULL,NULL); /* create a new bin to hold the elements */ bin = gst_pipeline_new ("pipeline"); /* create a disk reader */ filesrc = gst_element_factory_make ("filesrc", "disk_source"); g_assert (filesrc); g_object_set (G_OBJECT (filesrc), "location", "PATH_TO_THE_EXECUTABLE", NULL); demux = gst_element_factory_make ("mpegtsdemux", "demuxer"); if (!demux) { g_print ("could not find plugin \"mpegtsmux\""); return -1; } vdecoder = gst_element_factory_make ("mpeg2dec", "decode"); if (!vdecoder) { g_print ("could not find plugin \"mpeg2dec\""); return -1; } videosink = gst_element_factory_make ("xvimagesink", "play_video"); g_assert (videosink); /* add objects to the main pipeline */ gst_bin_add_many (GST_BIN (bin), filesrc, demux, vdecoder, videosink, NULL); /* link the elements */ gst_element_link_many (filesrc, demux, vdecoder, videosink, NULL); gst_element_set_state(videosink, GST_STATE_READY); QApplication::syncX(); gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId()); /* start playing */ gst_element_set_state (bin, GST_STATE_PLAYING); } Could you explain more in detail about the usage of gst_x_overlay_set_xwindow_id() wrt my context? Could i get any hint as to how i can integrate gstreamer under Qt? Please help me solve this problem

    Read the article

  • How to diagnose cause, fix, or work around Adobe ActiveX / COM related error 0x80004005 programmatically

    - by Streamline
    I've built a C# .NET app that uses the Adobe ActiveX control to display a PDF. It relies on a couple DLLs that get shipped with the application. These DLLs interact with the locally installed Adobe Acrobat or Adobe Acrobat Reader installed on the machine. This app is being used by some customer already and works great for nearly all users ( I check to see that the local machine is running at least version 9 of either Acrobat or Reader already ). I've found 3 cases where the app returns the error message "Error HRESULT E_FAIL has been returned from a call to a COM component" when trying to load (when the activex control is loading). I've checked one of these user's machines and he has Acrobat 9 installed and is using it frequently with no problems. It does appear that Acrobat 7 and 8 were installed at one time since there are entries for them in the registry along with Acrobat 9. I can't reproduce this problem locally, so I am not sure exactly which direction to go. The error at the top of the stacktrace is: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component. Some research into this error indicates it is a registry problem. Does anyone have a clue as to how to fix or work around this problem, or determine how to get to the core root of the problem? The full content of the error message is this: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.    at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)    at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)    at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)    at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)    at System.Windows.Forms.AxHost.CreateInstance()    at System.Windows.Forms.AxHost.GetOcxCreate()    at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)    at System.Windows.Forms.AxHost.CreateHandle()    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.AxHost.EndInit()    at AcrobatChecker.Viewer.InitializeComponent()    at AcrobatChecker.Viewer..ctor()    at AcrobatChecker.Form1.btnViewer_Click(Object sender, EventArgs e)    at System.Windows.Forms.Control.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)    at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)    at System.Windows.Forms.Control.WndProc(Message& m)    at System.Windows.Forms.ButtonBase.WndProc(Message& m)    at System.Windows.Forms.Button.WndProc(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
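
    One low-risk way to narrow a failure like this down is to probe the Acrobat/Reader ActiveX registration on the affected machine before the AxHost wrapper tries to create it. The C# sketch below is purely diagnostic and not part of the app above; the ProgIDs listed are assumptions about a typical Acrobat/Reader install, so adjust them to whatever the AxHost wrapper actually references.

        using System;
        using Microsoft.Win32;

        static class AcroPdfProbe
        {
            // ProgIDs the Adobe PDF ActiveX control is commonly registered under (assumed).
            static readonly string[] ProgIds = { "AcroPDF.PDF", "AcroPDF.PDF.1", "PDF.PdfCtrl.6" };

            public static void Report()
            {
                foreach (string progId in ProgIds)
                {
                    // GetTypeFromProgID returns null when the ProgID is not registered.
                    Type t = Type.GetTypeFromProgID(progId, false);
                    Console.WriteLine("{0}: {1}", progId,
                        t == null ? "not registered" : "registered, CLSID " + t.GUID);

                    if (t != null)
                    {
                        // Verify the CLSID's InprocServer32 points at a DLL that still exists;
                        // leftover registrations from older Acrobat versions are worth ruling out.
                        string subKey = @"CLSID\{" + t.GUID + @"}\InprocServer32";
                        using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(subKey))
                        {
                            string server = key == null ? null : key.GetValue(null) as string;
                            Console.WriteLine("  InprocServer32 = {0} (file exists: {1})",
                                server,
                                server != null && System.IO.File.Exists(server.Trim('"')));
                        }
                    }
                }
            }
        }

    Running something like this on one of the three failing machines and comparing the output with a working machine at least tells you whether the problem is a stale or broken COM registration rather than anything in the .NET code itself.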

    Read the article

  • Unauthorized Sharepoint WSDL from Coldfusion 8

    - by antony.trupe
    How do I solve the error: Unable to read WSDL from URL: https://workflowtest.site.edu/_vti_bin/Lists.asmx?WSDL. Error: 401 Unauthorized. I can successfully view the WSDL from the browser using the same user account. I'm not sure which authentication is being used (Basic or Integrated). How would I find that out? The code making the call is: <cfinvoke username="username" password="password" webservice="https://workflowtest.liberty.edu/_vti_bin/Lists.asmx?WSDL" method="GetList" listName="{CB02EB71-392E-4906-B512-8EC002F72436}" > The impression I get is that coldfusion doesn't like being made to authenticate to get the WSDL. Full stack trace: coldfusion.xml.rpc.XmlRpcServiceImpl$CantFindWSDLException: Unable to read WSDL from URL: https://workflowtest.liberty.edu/_vti_bin/Lists.asmx?WSDL. at coldfusion.xml.rpc.XmlRpcServiceImpl.retrieveWSDL(XmlRpcServiceImpl.java:709) at coldfusion.xml.rpc.XmlRpcServiceImpl.access$000(XmlRpcServiceImpl.java:53) at coldfusion.xml.rpc.XmlRpcServiceImpl$1.run(XmlRpcServiceImpl.java:239) at java.security.AccessController.doPrivileged(Native Method) at coldfusion.xml.rpc.XmlRpcServiceImpl.registerWebService(XmlRpcServiceImpl.java:232) at coldfusion.xml.rpc.XmlRpcServiceImpl.getWebService(XmlRpcServiceImpl.java:496) at coldfusion.xml.rpc.XmlRpcServiceImpl.getWebServiceProxy(XmlRpcServiceImpl.java:450) at coldfusion.tagext.lang.InvokeTag.doEndTag(InvokeTag.java:413) at coldfusion.runtime.CfJspPage._emptyTcfTag(CfJspPage.java:2662) at cftonytest2ecfm1787185330.runPage(/var/www/webroot/tonytest.cfm:16) at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:196) at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:370) at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65) at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279) at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48) at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40) at coldfusion.filter.PathFilter.invoke(PathFilter.java:86) at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70) at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:74) at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28) at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38) at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46) at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38) at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22) at coldfusion.CfmServlet.service(CfmServlet.java:175) at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89) at jrun.servlet.FilterChain.doFilter(FilterChain.java:86) at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42) at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46) at jrun.servlet.FilterChain.doFilter(FilterChain.java:94) at jrun.servlet.FilterChain.service(FilterChain.java:101) at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106) at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42) at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286) at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543) at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203) at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320) at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428) at 
jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266) at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)

    Read the article

  • iPhone OpenGL ES: How do I use gravity vector to correctly transform scene for augmented reality

    - by gpdawson
    I'm trying figure out how to get an OpenGL specified object to be displayed correctly according to the device orientation (ie. according to the gravity vector from the accelerometer, and heading from compass). The GLGravity sample project has an example which is almost like this (despite ignoring heading), but it has some glitches. For example, the teapot jumps 180deg as the device viewing angle crosses the horizon, and it also rotates spuriously if you tilt the device from portrait into landscape. This is fine for the context of this app, as it just shows off an object and it doesn't matter that it does these things. But it means that the code just doesn't work when you attempt to emulate real life viewing of an OpenGL object according to the device's orientation. What happens is that it almost works, but the heading rotation you apply from the compass gets "corrupted" by the spurious additional rotations seen in the GLGravity example project. Can anyone provide sample code that shows how to adjust correctly for the device orientation (ie. gravity vector), or to fix the GLGravity example so that it doesn't include spurious heading changes? //Clear matrix to be used to rotate from the current referential to one based on the gravity vector bzero(matrix, sizeof(matrix)); matrix[3][3] = 1.0; //Setup first matrix column as gravity vector matrix[0][0] = accel[0] / length; matrix[0][1] = accel[1] / length; matrix[0][2] = accel[2] / length; //Setup second matrix column as an arbitrary vector in the plane perpendicular to the gravity vector {Gx, Gy, Gz} defined by by the equation "Gx * x + Gy * y + Gz * z = 0" in which we arbitrarily set x=0 and y=1 matrix[1][0] = 0.0; matrix[1][1] = 1.0; matrix[1][2] = -accel[1] / accel[2]; length = sqrtf(matrix[1][0] * matrix[1][0] + matrix[1][1] * matrix[1][1] + matrix[1][2] * matrix[1][2]); matrix[1][0] /= length; matrix[1][1] /= length; matrix[1][2] /= length; //Setup third matrix column as the cross product of the first two matrix[2][0] = matrix[0][1] * matrix[1][2] - matrix[0][2] * matrix[1][1]; matrix[2][1] = matrix[1][0] * matrix[0][2] - matrix[1][2] * matrix[0][0]; matrix[2][2] = matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0]; //Finally load matrix glMultMatrixf((GLfloat*)matrix);
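
    For reference, the matrix this snippet builds is just an orthonormal basis derived from the measured acceleration; written out (this restates the code above, it does not by itself fix the heading glitch):

        \mathbf{g} = \frac{1}{\lVert \mathbf{a} \rVert}\,(a_x,\; a_y,\; a_z), \qquad
        \mathbf{u} = \frac{(0,\; 1,\; -a_y/a_z)}{\lVert (0,\; 1,\; -a_y/a_z) \rVert}, \qquad
        \mathbf{v} = \mathbf{g} \times \mathbf{u}

        M = \begin{pmatrix}
              g_x & u_x & v_x & 0 \\
              g_y & u_y & v_y & 0 \\
              g_z & u_z & v_z & 0 \\
              0   & 0   & 0   & 1
            \end{pmatrix}

    with g, u, v as the columns handed to glMultMatrixf in column-major order (g·u = 0 by construction). Note that the division by a_z is likely why the basis degenerates, and the teapot flips, as the device passes through the horizontal.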

    Read the article

  • Unable to resolve class in build.gradle using Android Studio 0.60/Gradle 0.11

    - by saywhatnow
    Established app working fine using Android Studio 0.5.9/ Gradle 0.9 but upgrading to Android Studio 0.6.0/ Gradle 0.11 causes the error below. Somehow Studio seems to have lost the ability to resolve the android tools import at the top of the build.gradle file. Anyone got any ideas on how to solve this? build file 'Users/[me]/Repositories/[project]/[module]/build.gradle': 1: unable to resolve class com.android.builder.DefaultManifestParser @ line 1, column 1. import com.android.builder.DefaultManifestParser 1 error at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302) at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858) at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548) at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497) at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306) at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287) at org.gradle.groovy.scripts.internal.DefaultScriptCompilationHandler.compileScript(DefaultScriptCompilationHandler.java:115) ... 77 more 2014-06-09 10:15:28,537 [ 92905] INFO - .BaseProjectImportErrorHandler - Failed to import Gradle project at '/Users/[me]/Repositories/[project]' org.gradle.tooling.BuildException: Could not run build action using Gradle distribution 'http://services.gradle.org/distributions/gradle-1.12-all.zip'. at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:53) at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57) at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64) [project]/[module]/build.gradle import com.android.builder.DefaultManifestParser apply plugin: 'android-sdk-manager' apply plugin: 'android' android { sourceSets { main { manifest.srcFile 'src/main/AndroidManifest.xml' res.srcDirs = ['src/main/res'] } debug { res.srcDirs = ['src/debug/res'] } release { res.srcDirs = ['src/release/res'] } } compileSdkVersion 19 buildToolsVersion '19.0.0' defaultConfig { minSdkVersion 14 targetSdkVersion 19 } signingConfigs { release } buildTypes { release { runProguard false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt' signingConfig signingConfigs.release applicationVariants.all { variant -> def file = variant.outputFile def manifestParser = new DefaultManifestParser() def wmgVersionCode = manifestParser.getVersionCode(android.sourceSets.main.manifest.srcFile) println wmgVersionCode variant.outputFile = new File(file.parent, file.name.replace("-release.apk", "_" + wmgVersionCode + ".apk")) } } } packagingOptions { exclude 'META-INF/LICENSE.txt' exclude 'META-INF/NOTICE.txt' } } def Properties props = new Properties() def propFile = file('signing.properties') if (propFile.canRead()){ props.load(new FileInputStream(propFile)) if (props!=null && props.containsKey('STORE_FILE') && props.containsKey('STORE_PASSWORD') && props.containsKey('KEY_ALIAS') && props.containsKey('KEY_PASSWORD')) { println 'RELEASE BUILD SIGNING' android.signingConfigs.release.storeFile = file(props['STORE_FILE']) android.signingConfigs.release.storePassword = props['STORE_PASSWORD'] android.signingConfigs.release.keyAlias = props['KEY_ALIAS'] android.signingConfigs.release.keyPassword = props['KEY_PASSWORD'] } else { println 'RELEASE BUILD NOT FOUND SIGNING PROPERTIES' 
android.buildTypes.release.signingConfig = null } }else { println 'RELEASE BUILD NOT FOUND SIGNING FILE' android.buildTypes.release.signingConfig = null } repositories { maven { url 'https://repo.commonsware.com.s3.amazonaws.com' } maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' } } dependencies { compile 'com.github.gabrielemariotti.changeloglib:library:1.4.+' compile 'com.google.code.gson:gson:2.2.4' compile 'com.google.android.gms:play-services:+' compile 'com.android.support:appcompat-v7:+' compile 'com.squareup.okhttp:okhttp:1.5.+' compile 'com.octo.android.robospice:robospice:1.4.11' compile 'com.octo.android.robospice:robospice-cache:1.4.11' compile 'com.octo.android.robospice:robospice-retrofit:1.4.11' compile 'com.commonsware.cwac:security:0.1.+' compile 'com.readystatesoftware.sqliteasset:sqliteassethelper:+' compile 'com.android.support:support-v4:19.+' compile 'uk.co.androidalliance:edgeeffectoverride:1.0.1+' compile 'de.greenrobot:eventbus:2.2.1+' compile project(':captureActivity') compile ('de.keyboardsurfer.android.widget:crouton:1.8.+') { exclude group: 'com.google.android', module: 'support-v4' } compile files('libs/CWAC-LoaderEx.jar') }

    Read the article

  • Ajax comments form in ASP.NET MVC2

    - by Artiom Chilaru
    I've been playing around with different aspects of MVC for some time now, and I've reached a situation where I'm not sure what would be the best way to solve a problem. I'm hoping that the SO community will help me out here :P I've seen a number of examples of Ajax.BeginForm on the internet, and it seems like a very nifty idea. E.g. you have a dropdown where you select a customer - and on selecting one it will load this client's details in some placeholder on the page. This works perfectly fine. But what to do if you want to tie in some validation in the box? Just hypothetically, imagine an article page, and user comments in the bottom. Below the comments area there's an ajax-y "Add comment" box. When a user adds a comment, it will appear in the comments area, below the last comment there. If I set the Ajax.BeginForm to Append the result of the call to the Comments area, it will work fine. But what if the data posted is not valid? Instead of appending a "successful" comment to the comments area I have to show the user validation errors. At this point I decided that the area INSIDE the Ajax.BeginForm will be inside a partial, and the form's submits will return this partial. Validation works fine. On each submit we reload the contents inside the form element. But how to add the successful comment to the top? Other things to consider: The comment form also has a "Preview" button. When the user clicks on Preview, I should load the rendered comment into a preview box. This will probably be inside the form area as well. I was thinking of using Json results instead. When the user submits the form, the server code will generate a Json object with a Success value, and html rendered partials as some properties. Something like { "success": true, "form": "<html form data>", "comment": "successful comment html to inject into the page" } This would be a perfect solution, except there's no way in MVC to render a partial into a string, inside the controller (separation of context, remember?). So.. what should I do then? Any "correct" way to implement this?
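
    On the last point - getting a rendered partial into that JSON payload - MVC 2 does let you drive the view engine manually from a controller, even though there is no one-line helper for it. A sketch of the usual shape (the class and method names here are invented for illustration):

        using System.IO;
        using System.Web.Mvc;

        public abstract class AjaxControllerBase : Controller
        {
            // Renders a partial view (with the given model) to a string,
            // so it can go into a JSON result alongside a success flag.
            protected string RenderPartialToString(string viewName, object model)
            {
                ViewData.Model = model;

                using (StringWriter writer = new StringWriter())
                {
                    ViewEngineResult viewResult =
                        ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                    ViewContext viewContext =
                        new ViewContext(ControllerContext, viewResult.View, ViewData, TempData, writer);
                    viewResult.View.Render(viewContext, writer);
                    return writer.ToString();
                }
            }
        }

    An AddComment action deriving from this could then return Json(new { success = ModelState.IsValid, form = RenderPartialToString("CommentForm", model), comment = RenderPartialToString("Comment", comment) }), which matches the JSON layout sketched in the question.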

    Read the article

  • GLFW 3 initialized, yet not?

    - by mSkull
    I'm struggling with creating a window with the GLFW 3 function, glfwCreateWindow. I have set an error callback function, that pretty much just prints out the error number and description, and according to that the GLFW library have not been initialized, even though the glfwInit function just returned success? Here's an outtake from my code // Error callback function prints out any errors from GFLW to the console static void error_callback( int error, const char *description ) { cout << error << '\t' << description << endl; } bool Base::Init() { // Set error callback /*! * According to the documentation this can be use before glfwInit, * and removing won't change anything anyway */ glfwSetErrorCallback( error_callback ); // Initialize GLFW /*! * This return succesfull, but... */ if( !glfwInit() ) { cout << "INITIALIZER: Failed to initialize GLFW!" << endl; return false; } else { cout << "INITIALIZER: GLFW Initialized successfully!" << endl; } // Create window /*! * When this is called, or any other glfw functions, I get a * "65537 The GLFW library is not initialized" in the console, through * the error_callback function */ window = glfwCreateWindow( 800, 600, "GLFW Window", NULL, NULL ); if( !window ) { cout << "INITIALIZER: Failed to create window!" << endl; glfwTerminate(); return false; } // Set window to current context glfwMakeContextCurrent( window ); ... return true; } And here's what's printed out in the console INITIALIZER: GLFW Initialized succesfully! 65537 The GLFW library is not initialized INITIALIZER: Failed to create window! I think I'm getting the error because of the setup isn't entirely correct, but I've done the best I can with what I could find around the place I downloaded the windows 32 from glfw.org and stuck the 2 includes files from it into minGW/include/GLFW, the 2 .a files (from the lib-mingw folder) into minGW/lib and the dll, also from the lib-mingw folder, into Windows/System32 In code::blocks I have, from build options - linker settings, linked the 2 .a files from the download. I believe I need to link more things, but I can figure out what, or where I should get those things from.

    Read the article

  • Entity Framework SaveChanges error details

    - by Marek Karbarz
    When saving changes with SaveChanges on a data context is there a way to determine which Entity causes an error? For example, sometimes I'll forget to assign a date to a non-nullable date field and get "Invalid Date Range" error, but I get no information about which entity or which field it's caused by (I can usually track it down by painstakingly going through all my objects, but it's very time consuming). Stack trace is pretty useless as it only shows me an error at the SaveChanges call without any additional information as to where exactly it happened. Note that I'm not looking to solve any particular problem I have now, I would just like to know in general if there's a way to tell which entity/field is causing a problem. Quick sample of a stack trace as an example - in this case an error happened because CreatedOn date was not set on IAComment entity, however it's impossible to tell from this error/stack trace [SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.] System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value) +2127345 System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value) +232 System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb) +46 System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj) +4997789 System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc) +6248 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +987 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32 System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141 System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12 System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) +10 System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) +8084396 System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +267 [UpdateException: An error occurred while updating the entries. See the inner exception for details.] 
System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) +389 System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) +163 System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) +609 IADAL.IAController.Save(IAHeader head) in C:\Projects\IA\IADAL\IAController.cs:61 IA.IAForm.saveForm(Boolean validate) in C:\Projects\IA\IA\IAForm.aspx.cs:198 IA.IAForm.advance_Click(Object sender, EventArgs e) in C:\Projects\IA\IA\IAForm.aspx.cs:287 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +112 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5019
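
    For the general question, the UpdateException at the bottom of that trace does carry the information you want: its StateEntries collection points back at the ObjectStateEntry, and therefore the entity, that was being written when the command failed. A sketch of a wrapper around SaveChanges, assuming an ObjectContext-based model like the one in the trace:

        using System;
        using System.Data;          // UpdateException
        using System.Data.Objects;  // ObjectContext, ObjectStateEntry

        static class SaveChangesDiagnostics
        {
            public static void SaveWithDiagnostics(ObjectContext context)
            {
                try
                {
                    context.SaveChanges();
                }
                catch (UpdateException ex)
                {
                    // StateEntries identifies the entities the failing command was persisting.
                    foreach (ObjectStateEntry entry in ex.StateEntries)
                    {
                        if (entry.Entity != null)
                        {
                            Console.WriteLine("Failed while saving a {0} in state {1}",
                                entry.Entity.GetType().Name, entry.State);
                        }
                    }
                    throw; // rethrow after logging; the inner exception still has the SQL error
                }
            }
        }

    That would at least have named IAComment as the entity being written when the SqlDateTime overflow occurred, without stepping through every object by hand.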

    Read the article

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    By the subject line it seems to be a rather broad subject and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a windows 7 machine with an AD LDS instance on it whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances plus one with adminsitrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user. Using connectonProtecton="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx" I am able to authenticate on both environments (dev & qa). If I change to the connectionProtection to "Secure" I am not able to authenticate anymore; the error I get is "Parser Error Message: Unable to establish secure connection with the server" To me it sounds wrong to use connectionProtection="None" although I found on the net a lot of samples using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance having Administrator role? What other choices do I have (like using an domain account)? What if my machine where I am to deploy the application is not a part of the domain, will this affect in any way the behavior? I am novice in the respect so I would really appreciate some clear answers or some directions as where to look? Now beside the "signing in" feature of the ActiveDirectoryMembershipProvider I also want to add an extra one, which is setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider, and used System.DirectoryServices classes like DirectoryEntry and the like. When creating a directory entry I use the same credentials provided in web.config for the provider minus the AuthenticationType as I don't know what is right combination of the flags that corresponds to None/Secure. I am able to use Invoke "SetPassword" with ADS_OPTION_PASSWORD_METHOD option as ADS_PASSWORD_ENCODE_CLEAR on my dev machine (w/ AD LDS instance); nevertheless on qa environment (w/ ADAM instance) I am getting an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)" I am quite sure it is not about AD LDS vs ADAM but probably another configuration / permission issue. So can anyone help me with some hints on how to use this SetPassword feature? And as a general question what are the best practices when it comes to using ADAM regarding security, programming etc? Thanks in advance Iulian
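
    On the SetPassword half specifically, System.DirectoryServices can be told to send the password change to ADAM/AD LDS over the non-SSL LDAP port instead of invoking the ADSI option interface by hand. The sketch below is one commonly used shape, not a drop-in fix: the path, port and admin credentials are placeholders, and whether clear-text password operations are acceptable on your QA server is a policy decision for your environment.

        using System.DirectoryServices;

        static class AdamPasswordHelper
        {
            public static void ResetPassword(string userDn, string newPassword)
            {
                // Placeholder host/port/credentials - adjust for your ADAM or AD LDS instance.
                string path = "LDAP://localhost:389/" + userDn;

                using (DirectoryEntry user = new DirectoryEntry(
                    path, "CN=Admin,CN=xxx,DC=xxx,C=xx", "adminPassword", AuthenticationTypes.Secure))
                {
                    // Allow the password operation over the non-SSL LDAP port.
                    user.Options.PasswordPort = 389;
                    user.Options.PasswordEncoding = PasswordEncodingMethod.PasswordEncodingClear;

                    user.Invoke("SetPassword", new object[] { newPassword });
                    user.CommitChanges();
                }
            }
        }

    The 0x80072020 difference between the Windows 7 AD LDS instance and the Windows 2003 ADAM instance is then most likely down to which authentication type and password port the server is willing to accept, which is easier to reason about once both are set explicitly as above.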

    Read the article

  • Snow Leopard sqlite3-ruby install problem

    - by JZ
    UPDATE 3/20/10 I'm running Mac OSX Snow Leopard, this problem is caused by a recent train wreck in which I updated ruby without RVM. I've attempted to properly install/run RVM, however I can't get it to work. I am unable to install the sqlite3-ruby gem. I get the following ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. How do I fix this? justin-zollarss-mac-pro:~ justinz$ rails -v Rails 2.3.5 justin-zollarss-mac-pro:~ justinz$ ruby -v ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin10.2.0] justin-zollarss-mac-pro:~ justinz$ gem -v 1.3.5 justin-zollarss-mac-pro:~ justinz$ which gem /usr/local/bin/gem justin-zollarss-mac-pro:~ justinz$ whereis gem /usr/bin/gem justin-zollarss-mac-pro:~ justinz$ which ruby /usr/local/bin/ruby justin-zollarss-mac-pro:~ justinz$ whereis ruby /usr/bin/ruby justin-zollarss-mac-pro:~ justinz$ which rails /usr/local/bin/rails justin-zollarss-mac-pro:~ justinz$ whereis rails /usr/bin/rails justin-zollarss-mac-pro:~ justinz$ gem list *** LOCAL GEMS *** actionmailer (2.3.5) actionpack (2.3.5) activerecord (2.3.5) activeresource (2.3.5) activesupport (2.3.5) builder (2.1.2) bundler (0.9.11) columnize (0.3.1) erubis (2.6.5) fastercsv (1.5.1) ffi (0.6.3) gbarcode (0.98.16) i18n (0.3.5) linecache (0.43) mail (2.1.3) memcache-client (1.8.0) prawn (0.8.4) prawn-core (0.8.4) prawn-layout (0.8.4) prawn-security (0.8.4) rack (1.1.0, 1.0.1) rack-mount (0.6.1) rack-test (0.5.3) rails (2.3.5) rake (0.8.7) ruby-debug (0.10.3) ruby-debug-base (0.10.3) rubygems-update (1.3.6) sqlite3 (0.0.8) text-format (1.0.0) thor (0.13.4) tzinfo (0.3.17) justin-zollarss-mac-pro:~ justinz$ sudo gem install sqlite3-ruby Password: Building native extensions. This could take a while... ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb checking for fdatasync() in -lrt... no checking for sqlite3.h... yes checking for sqlite3_open() in -lsqlite3... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/local/bin/ruby --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib --with-rtlib --without-rtlib --with-sqlite3lib --without-sqlite3lib Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection. Results logged to /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out

    Read the article

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations and I really need a debugger (I have XSLTs that are 1000+ lines long and I didn't write them :-). The project is written in C# and makes use of extension objects: xslArg.AddExtensionObject("urn:<obj>", new <Obj>()); From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step-by-step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that makes stepping into the transform possible. They are listed here. In short: the XML and the XSLT must be loaded via a class that has the IXmlLineInfo interface (XmlReader & co.) the XML resolver used in the XSLTCompiledTransform constructor is file-based (XmlUriResolver should work). the stylesheet should be on the local machine or on the intranet (?) From what I can tell, I fit all these criteria, but it still doesn't work. The relevant code samples are posted below: // [...] xslTransform = new XslCompiledTransform(true); xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath)); // [...] // I already had the xml loaded in an xmlDocument // so I have to convert to an XmlReader XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml)); XsltArgumentList xslArg = new XsltArgumentList(); xslArg.AddExtensionObject("urn:[...]", new [...]()); xslTransform.Transform(r, xslArg, context.Response.Output); I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUriResolver and the stylesheet is stored locally. The screenshot below is what I get when stepping into the Transform function. First I can see the stylesheet code after stepping through the parameters (on template-match), I get this: If anyone has any idea why it doesn't work or has an alternative way of getting it to work I'd be much obliged :). Thanks, Alex
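
    One thing that stands out is that both the stylesheet and the input document are wrapped in StringReader, which throws away the document URI the debugger needs to map the compiled transform back to the .xsl file on disk. A sketch of loading both from file-backed readers instead (paths are placeholders, and XmlUrlResolver stands in for the custom BaseUriXmlResolver):

        using System.Xml;
        using System.Xml.Xsl;

        static class DebuggableTransform
        {
            public static void Run(string xslPath, string xmlPath, string outPath)
            {
                // enableDebug: true is required for stepping into the stylesheet.
                XslCompiledTransform xslt = new XslCompiledTransform(true);

                // XmlReader.Create(path) keeps the base URI / line information
                // that the Visual Studio XSLT debugger relies on.
                using (XmlReader style = XmlReader.Create(xslPath))
                {
                    // XsltSettings(true, true) = enable document() and enable script.
                    xslt.Load(style, new XsltSettings(true, true), new XmlUrlResolver());
                }

                XsltArgumentList args = new XsltArgumentList();
                // args.AddExtensionObject("urn:...", new MyExtension()); // extension objects still work here

                using (XmlReader input = XmlReader.Create(xmlPath))
                using (XmlWriter output = XmlWriter.Create(outPath))
                {
                    xslt.Transform(input, args, output);
                }
            }
        }

    This is only a sketch of the loading pattern, but if stepping works with file-backed readers and breaks with the StringReader version, the lost base URI is the culprit rather than the extension objects.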

    Read the article

  • Replacing UISplitViewController. New version isn't added for current orientation, frame size

    - by Justin Williams
    My application has two different modes I am switching between by replacing the two views in a UISplitViewController. This involves instantiating the new views and a new UISplitViewController. I then set the new detail view as the UISplitViewController's delegate. I'm running into an issue where when I replace the view controllers and splitview controller, they are not properly sized or added for the current orientation. For instance, if I have my iPad in UIDeviceOrientationPortraitUpsideDown the view will be upside down when I call addSubview, but will rotate to the proper orientation after a second. In another instance, if I have the device in landscape mode and swap the views, the detail view controller is not fully stretched to the size of the view. If I rotate the device to portrait and back to landscape, its resized properly. The code I'm using to create the new split view and view controllers is as follows. - (void)showNotes { teiphoneAppDelegate *appDelegate = (teiphoneAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate.splitViewController.view removeFromSuperview]; OMSavedScrapsController *notesViewController = [[OMSavedScrapsController alloc] initWithNibName:@"SavedScraps" bundle:nil]; UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:notesViewController]; ComposeViewController *noteDetailViewController = [[ComposeViewController alloc] initWithNibName:@"ComposeView-iPad" bundle:nil]; UISplitViewController *newSplitVC = [[UISplitViewController alloc] init]; newSplitVC.viewControllers = [NSArray arrayWithObjects:navController, noteDetailViewController, nil]; newSplitVC.delegate = noteDetailViewController; appDelegate.splitViewController = newSplitVC; // Fade in the new split view appDelegate.splitViewController.view.alpha = 0.0f; [appDelegate.window addSubview:appDelegate.splitViewController.view]; [appDelegate.window makeKeyAndVisible]; [UIView beginAnimations:nil context:appDelegate.window]; [UIView setAnimationDuration: 0.5f]; [UIView setAnimationCurve: UIViewAnimationCurveEaseInOut]; appDelegate.splitViewController.view.alpha = 1.0f; [UIView commitAnimations]; [notesViewController release]; [navController release]; [noteDetailViewController release]; [newSplitVC release]; } Any suggestions for how to get the new splitview to add for the device's current orientation and frame?

    Read the article

  • How do I work around an HTML Table rendering bug in IE 7?

    - by osmaniac
    I have a table. Some cells span multiple columns. Some cells span multiple rows and columns. But one row (which spans all columns but the rightmost one) creates an artifact. Part of the text in the cell is erroneously repeated left justified on a new row just below the table. I'm baffled. I tried rendering with and without "table-layout: fixed;". Same result. When I originally composed the design using just HTML and CSS, it looked great. But then I worked it into a page and had to add more columns to my master table the right to hold buttons. These buttons are in three groups, each having their own div to control floating and rewrapping when the window gets narrower. One div has another table inside it that groups a single row of buttons. Thus I have table inside div inside td inside outer table. I would prefer a simpler design, but how? This is what I want to have: ................................................................................... . . . . Four rows of data . Three groups of buttons that can reflow . . With several columns . if window gets narrower . . meticulously layed out, . . . That should not resize . . . when window gets narrower . . ................................................................................... . One more row of data spanning the whole screen which stays below the buttons . ................................................................................... What I was doing was putting the three divs with the buttons in the upper right in a single cell that spanned four rows. What other opportunities does CSS offer? The buttons are not allowed to overlap the data on the left or go past the data line below. The original design had the divs with the buttons NOT in a table with the data, but when the window gets narrow, some of the buttons flow such that they go underneath the data, which looks bad. I would post actual HTML, except it is generated by ASP, huge, with lots of CSS styling, and the feature that lets me view the final HTML is not working at the moment. (Built in security in the application.)

    Read the article

  • Download a file with DefaultHTTPClient and preemptive authentication

    - by Nils
    After I had a lot of problems with preemptive authentication , I got it finally working. Now the next problem. I want to get a file with it, but I don't know how. I thought the file data might be in the variable response, but it isn't. Any ideas how this might work? I'm trying it since days without success :( - Basically I'm trying to download an jpeg file, which is on a server protected by prem. auth. // BASIC AUTH /* * ==================================================================== * * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of the Apache Software Foundation. For more * information on the Apache Software Foundation, please see * <http://www.apache.org/>. */ //http://svn.apache.org/repos/asf/httpcomponents/httpclient/branches/4.0.x/httpclient/src/examples/org/apache/http/examples/client/ClientPreemptiveBasicAuthentication.java httpclient = new DefaultHttpClient(); httpclient.getCredentialsProvider().setCredentials( new AuthScope(host, port), new UsernamePasswordCredentials(username, password)); // Generate BASIC scheme object and stick it to the local // execution context BasicHttpContext localcontext = new BasicHttpContext(); BasicScheme basicAuth = new BasicScheme(); localcontext.setAttribute("preemptive-auth", basicAuth); //first request interceptor httpclient.addRequestInterceptor(new PreemptiveAuth(), 0); HttpHost targetHost = new HttpHost(host, port, "http"); //HttpGet httpget = new HttpGet("/"); HttpGet httpget = new HttpGet(http.url); System.out.println("executing request" + httpget.getRequestLine()); /// !!! HttpResponse response = httpclient.execute(targetHost, httpget, localcontext); HttpEntity entity = response.getEntity(); System.out.println("----------------------------------------"); System.out.println("+"+response.getStatusLine()+"+"); ...

    Read the article

  • Cannot get UISearchBar Scope Bar to appear in Toolbar on iPad

    - by Jann
    This is really causing me fits. I put a toolbar on the IUView on the iPad. I added the following: Search Bar (not Search Bar and Search Display) to the toolbar. I set the options to be as follows: Show Cancel Button, Show Scope Bar, Scope Button Titles are: "Title1" and "Title2" (with Title2's radio button selected). Opaque, Clear Context and Auto Resize are checked. I hooked up the delegate of Search Bar to the "File's Owner" and linked it to IBOutlet theSearchBar. In my viewWillAppear I have the following: [theSearchBar setScopeButtonTitles:[NSArray arrayWithObjects:@"Near Me",@"Everywhere",nil]]; //Just in case: [theSearchBar setShowsScopeBar:YES]; //doesn't seem to do anything: //[theSearchBar sizeToFit]; searchDisplayController = [[UISearchDisplayController alloc] initWithSearchBar:theSearchBar contentsController:self]; [self setSearchDisplayController:searchDisplayController]; [searchDisplayController setDelegate:self]; [searchDisplayController setSearchResultsDataSource:self]; //again--does not seem to do anything..but people have suggested it: [theSearchBar sizeToFit]; Okay, so far, I thought, so good. So, I made the File's Owner .m file to be a delegate for: UISearchBarDelegate, UISearchDisplayDelegate. My issue: I have yet to implement the delegates necessary to do the search but still... shouldn't I be seeing the scopeBar next to the search field when I click into the search field? Just so you know I DO see the log of the characters I type, so the delegate is working. I have the following dummy functions in the .m file (just in case) // called when keyboard search button pressed - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar { NSLog(@"Search Button Clicked\n"); [theSearchBar resignFirstResponder]; } // called when cancel button pressed - (void)searchBarCancelButtonClicked:(UISearchBar *)searchBar { NSLog(@"Cancel Button Clicked\n"); [theSearchBar resignFirstResponder]; } - (void)searchBar:(UISearchBar *)searchBar textDidChange:(NSString *)searchText { NSLog(@"Search Text So Far: '%@'\n",searchText); } - (BOOL)searchBarShouldBeginEditing:(UISearchBar *)searchBar { return YES; } - (BOOL)searchBarShouldEndEditing:(UISearchBar *)searchBar { return YES; } Why doesn't the Scope Bar appear? A results UIPopoverController appears with the title "Results" and "No results found" (of course) when i type the first character in my search...but no scope bar. (not that i expect anything other than "No Results Found". I am wondering where the scope bar is supposed to appear...in the titleView of the UIPopover? In the toolbar to the right of the search area? Where?

    Read the article

  • Howto - Running Redmine on mongrel as a service on windows

    - by Achilles
    I use Redmine on Mongrel as a project manager and I use a batch file (start-redmine.bat) to start the redmine in mongrel. There are 2 issues with my setup: 1. I have a running IIS on the server that occupies the HTTP port (80) 2. The start-redmine.bat must be periodically checked to see if it's stopped after a restart that is caused by windows update service. for the first issue, I have no choice but running mongrel on a port like 3000 and for the second issue I have to create a windows service that runs automatically in the background when the windows starts; and here comes the trouble! There are at least 3 ways to run redmine as a service that I'm aware of; none of them can satisfy a performance view on this subject. you may read about them on http://stackoverflow.com/questions/877943/how-to-configure-a-rails-app-redmine-to-run-as-a-service-on-windows I tried them all. The easiest way to setup such a service is using mongrel_service approach; in 3 lines of command you're done. but the performance is significantly lower than running that batch file... Now, I wanna show you my approach: First suppose we have ruby installed into c:\ruby and we have issued the command gem install mongrel to get the mongrel gem installed into c:\ruby\bin Also, suppose we have installed the Redmine into a folder like c:\redmine; and we have ruby's path (i.e. c:\ruby\bin) in our PATH environment variable. Now Download and install the Windows NT Resource Kit Tools from microsoft website. Open the command-line tool that comes with the Resource Kit (from start menu). Use instsrv to install a dummy service called Redmine using the following command: "[path-to-instsrv.exe]\instsrv" Redmine "[path-to-srvany.exe]\srvany.exe" in my case (which is the default case) it was something like this: "C:\Program Files\Windows Resource Kits\Tools\instsrv" Redmine "C:\Program Files\Windows Resource Kits\Tools\srvany.exe" Now create the batch file. Open a notepad and paste these instructions into it and then save it as "c:\redmine\start-redmine.bat" @echo off cd c:\redmine\ mongrel_rails start -a 0.0.0.0 -p 3000 -e production Now we need to configure that dummy service we had created before. WATCH OUT WHAT YOU'RE DOING FROM HERE ON, OR YOU MAY CORRUPT YOUR WINDOWS. To configure that service, open windows registry editor (Start - Run - regedit) and navigate to this node: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Redmine Right-Click on "Redmine" node and using the context menu, create a new key called Parameters (New - Key) Right-Click on "Parameters" and create a String Value property called Application. Do this again and create another String Value called AppParameters. Now Double-click on "Application" and put cmd.exe into "Value data" section. Then Double-click on "AppParameters" and put /C "C:\redmine\start-redmine.bat" into Value data section. We're done! issue this command to run the redmine on mongrel as a service: net start Redmine

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql Entity which has an EntitySet. In my View I display the Entity with it's properties plus an editable list for the child entites. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far, it correctly binds the child entites. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities, it will happily add new ones but not delete the deleted ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); This will delete the child record from the DB. But I can't get it to work for a model binded object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) //it sucessfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and this throws an InvalidOperationException, I have a german OS so I'm not exactly sure what the error message is in english, but it says something along the lines of that the entity needs a Version (Timestamp row?) or no update check policies. I have set UpdateCheck="Never" to every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read alot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work like I expect it to work? Do I really have to manually iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what not. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would be the Entity Framework or NHibernate a better choice for this scenario? Or would I run into the same problem?
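
    For the delete half specifically, LINQ to SQL will not treat "missing from the posted collection" as a delete on its own; one workable pattern is to diff the posted child ids against the rows currently in the database and hand the leftovers to DeleteAllOnSubmit before SubmitChanges. The helper below is only a sketch - the generic shape and the id selector are illustrative, not part of the question's model:

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        static class ChildSyncHelper
        {
            // Marks for deletion every child row whose id was not posted back from the form.
            public static void DeleteMissingChildren<TChild>(
                DataContext db,
                IEnumerable<TChild> childrenInDb,
                HashSet<int> postedIds,
                Func<TChild, int> idOf) where TChild : class
            {
                List<TChild> removed = childrenInDb.Where(c => !postedIds.Contains(idOf(c))).ToList();
                db.GetTable<TChild>().DeleteAllOnSubmit(removed);
            }
        }

    In the Update action that would be called with the ids harvested from the model-bound obj (the posted graph) and the children of obj2 (the database copy), before SubmitChanges; inserts and updates can keep flowing through TryUpdateModel as they already do.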

    Read the article

  • [AS3/C#] Byte encryption ( DES-CBC zero pad )

    - by mark_dj
    Hi there, Currently writing my own AMF TcpSocketServer. Everything works good so far i can send and recieve objects and i use some serialization/deserialization code. Now i started working on the encryption code and i am not so familiar with this stuff. I work with bytes , is DES-CBC a good way to encrypt this stuff? Or are there other more performant/secure ways to send my data? Note that performance is a must :). When i call: ReadAmf3Object with the decrypter specified i get an: InvalidOperationException thrown by my ReadAmf3Object function when i read out the first byte the Amf3TypeCode isn't specified ( they range from 0 to 16 i believe (Bool, String, Int, DateTime, etc) ). I got Typecodes varying from 97 to 254? Anyone knows whats going wrong? I think it has something to do with the encryption part. Since the deserializer works fine w/o the encryption. I am using the right padding/mode/key? I used: http://code.google.com/p/as3crypto/ as as3 encryption/decryption library. And i wrote an Async tcp server with some abuse of the threadpool ;) Anyway here some code: C# crypter initalization code System.Security.Cryptography.DESCryptoServiceProvider crypter = new DESCryptoServiceProvider(); crypter.Padding = PaddingMode.Zeros; crypter.Mode = CipherMode.CBC; crypter.Key = Encoding.ASCII.GetBytes("TESTTEST"); AS3 private static var _KEY:ByteArray = Hex.toArray(Hex.fromString("TESTTEST")); private static var _TYPE:String = "des-cbc"; public static function encrypt(array:ByteArray):ByteArray { var pad:IPad = new NullPad; var mode:ICipher = Crypto.getCipher(_TYPE, _KEY, pad); pad.setBlockSize(mode.getBlockSize()); mode.encrypt(array); return array; } public static function decrypt(array:ByteArray):ByteArray { var pad:IPad = new NullPad; var mode:ICipher = Crypto.getCipher(_TYPE, _KEY, pad); pad.setBlockSize(mode.getBlockSize()); mode.decrypt(array); return array; } C# read/unserialize/decrypt code public override object Read(int length) { object d; using (MemoryStream stream = new MemoryStream()) { stream.Write(this._readBuffer, 0, length); stream.Position = 0; if (this.Decrypter != null) { using (CryptoStream c = new CryptoStream(stream, this.Decrypter, CryptoStreamMode.Read)) using (AmfReader reader = new AmfReader(c)) { d = reader.ReadAmf3Object(); } } else { using (AmfReader reader = new AmfReader(stream)) { d = reader.ReadAmf3Object(); } } } return d; }
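
    Before digging into the AMF layer it may be worth pinning down the IV: in CBC mode both sides must agree on it as well as on the key, and DESCryptoServiceProvider generates a random IV whenever you do not set one, which could by itself corrupt the first decrypted block and produce type codes in the 97-254 range. A C#-side sketch with the key and IV fixed explicitly (the literal values mirror the test key in the question and are placeholders only; DES is also a weak cipher, so if the AS3 side allows it, AES is the safer long-term choice):

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class DesCbcExample
        {
            static readonly byte[] Key = Encoding.ASCII.GetBytes("TESTTEST"); // 8 bytes - placeholder
            static readonly byte[] Iv  = Encoding.ASCII.GetBytes("TESTTEST"); // must match the AS3 side

            public static byte[] Decrypt(byte[] cipherText)
            {
                using (DESCryptoServiceProvider des = new DESCryptoServiceProvider())
                {
                    des.Mode = CipherMode.CBC;
                    des.Padding = PaddingMode.Zeros;   // mirrors the NullPad used on the as3crypto side
                    des.Key = Key;
                    des.IV = Iv;

                    using (MemoryStream ms = new MemoryStream(cipherText))
                    using (CryptoStream cs = new CryptoStream(ms, des.CreateDecryptor(), CryptoStreamMode.Read))
                    using (MemoryStream plain = new MemoryStream())
                    {
                        byte[] buffer = new byte[4096];
                        int read;
                        while ((read = cs.Read(buffer, 0, buffer.Length)) > 0)
                            plain.Write(buffer, 0, read);
                        return plain.ToArray();
                    }
                }
            }
        }

    If the first plaintext byte comes out as a sane Amf3TypeCode once the IVs match, the remaining work is in the stream handling rather than the cipher setup.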

    Read the article

  • XmlSchema.Read throws exception when an element is declared nillable

    - by G33kKahuna
    I have a simple schema that I am trying to read using XmlSchema.Read() method. I keep getting The 'nillable' attribute is not supported in this context Here is a simple code in C# XmlSchema schema = null; using (StreamReader reader = new StreamReader(<Path to Schema file name>) { schema = XmlSchema.Read(reader.BaseStream, null); } Below is the schema <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003" xmlns="http://xyz.com.schema.bc.mySchema" attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="http://xyz.com.schema.bc.mySchema" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="data"> <xs:complexType> <xs:sequence> <xs:element name="Component"> <xs:complexType> <xs:sequence> <xs:element minOccurs="1" maxOccurs="unbounded" name="row"> <xs:complexType> <xs:sequence> <xs:element name="changed_by" type="xs:string" nillable="true" /> <xs:element name="column_name" type="xs:string" nillable="true" /> <xs:element name="comment_text" type="xs:string" nillable="true" /> <xs:element name="is_approved" type="xs:string" nillable="true" /> <xs:element name="log_at" type="xs:dateTime" nillable="true" /> <xs:element name="new_val" type="xs:string" nillable="true" /> <xs:element name="old_val" type="xs:string" nillable="true" /> <xs:element name="person_id" type="xs:string" nillable="true" /> <xs:element name="poh_id" type="xs:string" nillable="true" /> <xs:element name="pol_id" type="xs:string" nillable="true" /> <xs:element name="search_name" type="xs:string" nillable="true" /> <xs:element name="unique_id" type="xs:integer" nillable="true" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:complexType> </xs:element> </xs:schema>
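
    Whatever is triggering the nillable complaint, it is easier to see where it comes from if XmlSchema.Read is given a ValidationEventHandler instead of null and the file is read through XmlReader.Create, so problems arrive with line numbers rather than as a bare exception; compiling the schema in an XmlSchemaSet afterwards surfaces structural issues the reader alone lets through. A sketch (the path is a placeholder):

        using System;
        using System.Xml;
        using System.Xml.Schema;

        static class SchemaLoader
        {
            public static XmlSchema Load(string path)
            {
                XmlSchema schema;
                using (XmlReader reader = XmlReader.Create(path))
                {
                    schema = XmlSchema.Read(reader, OnValidation);
                }

                // Compiling in a set reports problems (bad nillable usage, etc.) with positions.
                XmlSchemaSet set = new XmlSchemaSet();
                set.ValidationEventHandler += OnValidation;
                set.Add(schema);
                set.Compile();
                return schema;
            }

            static void OnValidation(object sender, ValidationEventArgs e)
            {
                Console.WriteLine("{0} at line {1}: {2}",
                    e.Severity, e.Exception == null ? 0 : e.Exception.LineNumber, e.Message);
            }
        }

    The callback approach does not change what the parser accepts, but it pinpoints exactly which xs:element declaration it is objecting to.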

    Read the article

  • Problems connecting to WCF Service via NetNamedPipeBinding

    - by John
    I'm having trouble figuring out how to get a named pipe WCF service to work. The service is in a separate assembly from the executable. The config looks like this:

    <system.serviceModel>
      <bindings>
        <netNamedPipeBinding>
          <binding name="NoSecurityIPC">
            <security mode="None" />
          </binding>
        </netNamedPipeBinding>
      </bindings>
      <client>
        <endpoint name="internal" address="channel1" binding="netNamedPipeBinding" bindingConfiguration="NoSecurityIPC" contract="conplement.TimeService.ICpTimeService" />
      </client>
      <services>
        <service name="cpTimeService">
          <host>
            <baseAddresses>
              <add baseAddress="net.pipe://localhost/" />
            </baseAddresses>
          </host>
          <endpoint address="channel1" binding="netNamedPipeBinding" bindingConfiguration="NoSecurityIPC" contract="conplement.TimeService.ICpTimeService" />
        </service>
      </services>
    </system.serviceModel>

    I'm using a ChannelFactory to create a proxy to access the service host:

    ServiceHost h = new ServiceHost(typeof(TimeService), new Uri("net.pipe://localhost/"));
    h.AddServiceEndpoint(typeof(ITimeService), new NetNamedPipeBinding("NoSecurityIPC"), "net.pipe://localhost/");
    h.Open();

    ChannelFactory<ITimeService> factory = new ChannelFactory<ITimeService>("channel1", new EndpointAddress(new Uri("net.pipe://localhost/")));
    ICpTimeService proxy = factory.CreateChannel();
    using (proxy as IDisposable)
    {
        this.ds = proxy.LoadData();
    }

    I'm not sure what I'm doing wrong when I create the ChannelFactory; it can't seem to find "channel1" in the config. When I create my binding manually and pass it to the ChannelFactory constructor, the factory and the proxy are created, but the call to LoadData() fails (times out). Can anyone see what I'm doing wrong here?
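
    Two details of the posted config commonly trip this up: the service element's name attribute has to be the fully qualified type name of the implementation class (not "cpTimeService"), and a client endpoint address has to be absolute ("net.pipe://localhost/channel1", not just "channel1"); the ChannelFactory(string) constructor also looks endpoints up by their name attribute ("internal" here), not by their address. As a sanity check, here is a minimal all-in-code sketch that bypasses the config entirely; ICpTimeService, TimeService and LoadData() are taken from the snippet above, everything else is illustrative and not the poster's code.

    // All-in-code sketch: one binding instance and one absolute pipe address shared by
    // host and client, with no configuration lookup involved.
    using System;
    using System.ServiceModel;

    class NamedPipeSketch
    {
        static void Main()
        {
            var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
            var address = new Uri("net.pipe://localhost/channel1");

            var host = new ServiceHost(typeof(TimeService));
            host.AddServiceEndpoint(typeof(ICpTimeService), binding, address);
            host.Open();

            var factory = new ChannelFactory<ICpTimeService>(binding, new EndpointAddress(address));
            ICpTimeService proxy = factory.CreateChannel();
            try
            {
                var data = proxy.LoadData();   // contract operation from the question
            }
            finally
            {
                ((IClientChannel)proxy).Close();
                factory.Close();
                host.Close();
            }
        }
    }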

    Read the article

  • iPhone MapKit problems: viewForAnnotation inconsistently setting pinColor?

    - by blackkettle
    Hi, I'm trying to set up a map that displays different pin colors depending on the type/class of the location in question. I know this is a pretty common thing to do, but I'm having trouble getting the viewForAnnotation delegate to consistently update/set the pin color. I have a showThisLocation function that basically cycles through a list of AddressAnnotations and then, based on the annotation class (bus stop, hospital, etc.), stores the color in NSUserDefaults:

    if( myClass == 1 ){
        [defaults setObject:@"1" forKey:@"currPinColor"];
        [defaults synchronize];
        NSLog(@"Should be %@!", [defaults objectForKey:@"currPinColor"]);
    } else if( myClass == 2 ){
        [defaults setObject:@"2" forKey:@"currPinColor"];
        [defaults synchronize];
        NSLog(@"Should be %@!", [defaults objectForKey:@"currPinColor"]);
    }
    [_mapView addAnnotation:myCurrentAnnotation];

    Then my viewForAnnotation delegate looks like this:

    - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
        if( annotation == mapView.userLocation ){
            return nil;
        }
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        MKPinAnnotationView *annView = nil;
        annView = (MKPinAnnotationView*)[mapView dequeueReusableAnnotationViewWithIdentifier:@"currentloc"];
        if( annView == nil ){
            annView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"currentloc"];
        }
        annView.pinColor = [defaults integerForKey:@"currPinColor"];
        NSLog(@"Pin color: %d", [defaults integerForKey:@"currPinColor"]);
        annView.animatesDrop = TRUE;
        annView.canShowCallout = YES;
        annView.calloutOffset = CGPointMake(-5, 5);
        return annView;
    }

    The problem is that, although the NSLog statements in the "if" block always confirm that the color has been set, the delegate sometimes, but not always, ends up with the correct color. I've also noticed that what generally happens is that the first search for a new location will set all pins to the last color in the "if" block, but searching for the same location again will set the pins to the correct color. I suspect I am not supposed to use NSUserDefaults in this way, so I also tried to create my own MKAnnotation subclass which included an additional property "currentPinColor", and while this allowed me to set the "currentPinColor", when I tried to access the "currentPinColor" from the delegate method, the compiler complained that it didn't know anything about "currentPinColor" in connection with MKAnnotation. Fair enough, I guess, but then I tried to revise the delegate method to - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MyCustomMKAnnotation>)annotation instead of the default - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation, at which point the compiler complained that it didn't know anything about the protocol for MyCustomMKAnnotation in this delegate context. What is the proper way to set up the delegate method and/or MyCustomMKAnnotation, or what is the appropriate way to achieve consistent pinColor settings? I'm just about out of ideas for things to try here.
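
    A common way around the defaults-based approach (sketched below, not tested against the poster's project) is to carry the colour on the annotation object itself: a small class conforming to MKAnnotation with its own pinColor property, read back inside viewForAnnotation: after an isKindOfClass: check, with the reused view's annotation property reset. The delegate keeps the standard id <MKAnnotation> signature, so no protocol changes are needed; the AddressAnnotation name and its properties are assumptions.

    // Sketch, not drop-in code: an annotation class that carries its own pin colour.
    #import <MapKit/MapKit.h>

    @interface AddressAnnotation : NSObject <MKAnnotation>
    @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
    @property (nonatomic, copy)   NSString *title;
    @property (nonatomic, assign) MKPinAnnotationColor pinColor;   // assumed property name; @synthesize these in the .m
    @end

    // ... and in the map view controller's @implementation:
    - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
        if (annotation == mapView.userLocation) {
            return nil;
        }
        MKPinAnnotationView *annView =
            (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:@"currentloc"];
        if (annView == nil) {
            annView = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                       reuseIdentifier:@"currentloc"] autorelease];
        } else {
            annView.annotation = annotation;   // a dequeued view still points at its old annotation
        }
        if ([annotation isKindOfClass:[AddressAnnotation class]]) {
            annView.pinColor = ((AddressAnnotation *)annotation).pinColor;
        }
        annView.animatesDrop = YES;
        annView.canShowCallout = YES;
        annView.calloutOffset = CGPointMake(-5, 5);
        return annView;
    }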

    Read the article

  • Why do 32-bit color EGL configurations fail with EGL_BAD_MATCH on the Moto Droid?

    - by Gilead
    I'm trying to figure out why certain EGL configurations cause the eglMakeCurrent() call to return EGL_BAD_MATCH on a Motorola Droid running Android 2.1u1. This is the full list of hardware-accelerated EGL configurations (those with EGL_CONFIG_CAVEAT == EGL_NONE); there are a few others with EGL_CONFIG_CAVEAT == EGL_SLOW_CONFIG, but those are backed by PixelFlinger 1.2, meaning they use the software renderer.

    ID: 0  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
    ID: 1  RGB: 8, 8, 8  Alpha: 8  Depth: 0   Stencil: 0   // BAD MATCH
    ID: 2  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
    ID: 3  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
    ID: 4  RGB: 8, 8, 8  Alpha: 8  Depth: 0   Stencil: 0   // BAD MATCH
    ID: 5  RGB: 8, 8, 8  Alpha: 8  Depth: 24  Stencil: 8   // BAD MATCH
    ID: 6  RGB: 5, 6, 5  Alpha: 0  Depth: 24  Stencil: 8
    ID: 7  RGB: 5, 6, 5  Alpha: 0  Depth: 0   Stencil: 0
    ID: 8  RGB: 5, 6, 5  Alpha: 0  Depth: 24  Stencil: 8

    Clearly, all configurations with 32-bit color depth fail and all 16-bit ones are OK, but:
    1. Why?
    2. WHY?! :)
    3. How do I tell which ones would fail before actually trying to use them?

    The code below is as simple as it can get. I put if (v[0] == 6) there to check different configs; normally they're chosen by a half-clever config matcher :)

    private void createSurface(SurfaceHolder holder) {
        egl = (EGL10)EGLContext.getEGL();
        eglDisplay = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
        egl.eglInitialize(eglDisplay, null);

        int[] numConfigs = new int[1];
        egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, null, 0, numConfigs);
        EGLConfig[] configs = new EGLConfig[numConfigs[0]];
        egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, configs, numConfigs[0], numConfigs);

        int[] v = new int[1];
        for (EGLConfig c : configs) {
            egl.eglGetConfigAttrib(eglDisplay, c, EGL10.EGL_CONFIG_ID, v);
            if (v[0] == 6) {
                eglConfig = c;
            }
        }

        eglContext = egl.eglCreateContext(eglDisplay, eglConfig, EGL10.EGL_NO_CONTEXT, null);
        if (eglContext == null || eglContext == EGL10.EGL_NO_CONTEXT) {
            throw new RuntimeException("Unable to create EGL context");
        }
        eglSurface = egl.eglCreateWindowSurface(eglDisplay, eglConfig, holder, null);
        if (eglSurface == null || eglSurface == EGL10.EGL_NO_SURFACE) {
            throw new RuntimeException("Unable to create EGL surface");
        }
        if (!egl.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext)) {
            throw new RuntimeException("Unable to make EGL current");
        }
        gl = (GL10)eglContext.getGL();
    }
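
    For context (hedged, not a definitive answer): EGL_BAD_MATCH from eglMakeCurrent() typically means the native window's pixel format disagrees with the chosen EGLConfig, and on Android 2.1 a SurfaceView window defaults to a 16-bit RGB_565 buffer, which would explain why only the 5/6/5 configs succeed. A sketch of one common workaround follows; surfaceView, egl, eglDisplay and eglConfig are assumed to be the same fields used in createSurface() above, and PixelFormat.TRANSLUCENT is one way to request a 32-bit window.

    // Sketch of a common workaround (imports: android.graphics.PixelFormat,
    // android.view.SurfaceHolder, javax.microedition.khronos.egl.EGL10): ask the window
    // for a 32-bit surface before creating the EGL surface, and log which native visual
    // the chosen config expects so the two can be compared.
    SurfaceHolder holder = surfaceView.getHolder();
    holder.setFormat(PixelFormat.TRANSLUCENT);   // 32-bit window; PixelFormat.RGB_565 matches the 16-bit configs

    int[] nativeVisual = new int[1];
    egl.eglGetConfigAttrib(eglDisplay, eglConfig, EGL10.EGL_NATIVE_VISUAL_ID, nativeVisual);
    android.util.Log.d("EGL", "config expects native visual id " + nativeVisual[0]);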

    Read the article

  • Unable to retrieve information from HP-UX pst_status object

    - by bogertron
    I am attempting to get process information by using the HP-UX C/C++ library. After scouring the internet, I discovered that HP-UX has the pstat.h header file, which allows programmers to retrieve process information. After working through the code on the HP website, I tried to create a little test sample to understand what the code does. I used example 3; however, I ran into several issues. The first issue came when I attempted to execute the following line of code:

    (void)printf("pid is %d, command is %s\n", pst[i].pst_pid, pst[i].pst_ucomm);

    When I attempted to print the string, I hit a memory fault. So I decided to try to see what the string is and came up with the following:

    #include <sys/param.h>
    #include <sys/pstat.h>
    #include <sys/unistd.h>
    #include <string.h>

    int main(int argc, char** argv)
    {
    #define BURST ((size_t)10)
        struct pst_status pst[BURST];
        int i, count;
        int idx = 0;    /* index within the context */
        int index = 0;

        /* loop until count == 0, which will occur when all have been returned */
        while ((count = pstat_getproc(pst, sizeof(pst[0]), BURST, idx)) > 0) {
            index = 0;
            printf("index: %d", index);

            /* got count (max of BURST) this time. process them */
            while (pst[i].pst_ucomm[index] != '\0') {
                printf("%c", pst[i].pst_ucomm[index]);
                index++;
            }
            printf("\n");

            for (i = 0; i < count; i++) {
                printf("pid is %d, command is \n", pst[i].pst_pid);
            }

            /*
             * now go back and do it again, using the next index after
             * the current 'burst'
             */
            idx = pst[count-1].pst_idx + 1;
        }
        if (count == -1)
            perror("pstat_getproc()");
    #undef BURST
    }

    Unfortunately, what happens is that I get the first process printed, then "pid is 2, command is", "pid is 2, command is", "pid is 2, command is"... I know I must be doing something foolish, since my C/C++ skills are not that great, but I cannot figure out what the issue is, since the code is largely copied from the HP website. So here are the questions, for clarity:
    1. Why can't printf("%s", pst[i].pst_ucomm); handle strings?
    2. Why can't I iterate over the processes in the system?
    Any help is greatly appreciated.
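
    A hedged sketch of the likely culprit: in the code above, i is still uninitialized when pst[i].pst_ucomm is read in the inner while loop (it is only assigned in the for loop further down), so that first access reads whatever happens to be on the stack; and pst_ucomm is a fixed-size char array that may not be NUL-terminated when the name fills the field, so bounding the printf is the safer way to print it. A reworked version of the burst loop under those assumptions:

    /* Reworked sketch of the burst loop: the command name is printed inside the
     * per-process for loop (so the index is always valid), and pst_ucomm is printed
     * with an explicit length bound. */
    #include <stdio.h>
    #include <sys/param.h>
    #include <sys/pstat.h>

    #define BURST ((size_t)10)

    int main(void)
    {
        struct pst_status pst[BURST];
        int i, count;
        int idx = 0;                /* continuation index for pstat_getproc() */

        while ((count = pstat_getproc(pst, sizeof(pst[0]), BURST, idx)) > 0) {
            for (i = 0; i < count; i++) {
                printf("pid is %d, command is %.*s\n",
                       (int)pst[i].pst_pid,
                       (int)sizeof(pst[i].pst_ucomm), pst[i].pst_ucomm);
            }
            idx = pst[count - 1].pst_idx + 1;   /* resume after the last entry returned */
        }
        if (count == -1)
            perror("pstat_getproc()");
        return 0;
    }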

    Read the article

< Previous Page | 749 750 751 752 753 754 755 756 757 758 759 760  | Next Page >