Search Results

Search found 15087 results on 604 pages for 'native mode'.

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    Some time ago, when working on an embedded system with a simple MMU, I used to program this MMU dynamically to detect memory corruptions. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added additional debugging code: at init, the memory used by foo was declared a forbidden region to the MMU; each time foo was accessed on purpose, access to the region was allowed just before and forbidden again just after; an MMU irq handler was added to dump the master and the address responsible for the violation. This was actually some kind of watchpoint, but handled directly by the code itself. Now I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools exist, like Valgrind or GDB, to manage memory problems, but as far as I know none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or under Windows is also welcome!

  • How to hide the console of batch scripts without losing std err/out streams

    - by cooper.thompson
    My question is similar to Running a CMD or BAT in silent mode, but with one additional constraint. If you use WshScript.Run in VBScript, you lose access to the standard in/error/out streams of the process. WshScript.Exec gives you access to the standard streams, but you can't hide your windows. How can you have your cake (hide the windows) and eat it too (have direct access to the console streams)? I'm currently thinking about a C++ executable which creates a new Window Station and Desktop (see MSDN) and runs a specified script within that new Desktop (I'm not yet an expert on Window Stations and Desktops, so this idea may be misguided). This idea is based loosely on Condor's USE_VISIBLE_DESKTOP feature, which, if disabled, runs Condor jobs on a non-visible Desktop. I haven't quite figured out if this requires elevated privileges. The tradeoff of this approach is that your script can disappear into limbo if it blocks on user input. Does anyone have any additional ideas? Or feedback on the approach outlined above? Edit: Also, the purpose of our script is to set up the user environment, so running as another user, or as a system scheduled task, isn't really an option (unless there are clever tricks I don't know about).

  • Silverlight 3: ListBox DataTemplate HorizontalAlignment

    - by Lee
    I have a ListBox with its ItemTemplate bound to a DataTemplate. My problem is I cannot get the elements in the template to stretch to the full width of the ListBox.

        <ListBox x:Name="listPeople" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"
                 Margin="0,0,0,0" Background="{x:Null}" SelectionMode="Extended" Grid.Row="1"
                 ItemTemplate="{StaticResource PersonViewModel.BrowserDataTemplate}"
                 ItemsSource="{Binding Mode=OneWay, Path=SearchResults}" >
        </ListBox>

        <DataTemplate x:Key="PersonViewModel.BrowserDataTemplate">
            <ListBoxItem HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Margin="5,5,5,5">
                    <Border Opacity=".1" x:Name="itemBorder" Background="#FF000000"
                            HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                            CornerRadius="5,5,5,5" MinWidth="100" Height="50"/>
                </Grid>
            </ListBoxItem>
        </DataTemplate>

    As you can see, I have added a border within the grid to indicate the width of the template. My goal is to see this border expand to the full width of the ListBox. Currently its width is determined by its contents or MinWidth, which is the only thing at the moment keeping it visible at all.
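
    For context, one commonly cited way to get items to fill the ListBox width is to stretch the generated item containers through ItemContainerStyle, so each ListBoxItem hands its full width to the template. This is a sketch, not taken from the original post, and assumes the same ListBox as above:

        <ListBox x:Name="listPeople" ItemsSource="{Binding Mode=OneWay, Path=SearchResults}"
                 ItemTemplate="{StaticResource PersonViewModel.BrowserDataTemplate}">
            <ListBox.ItemContainerStyle>
                <!-- Stretch the container's content so the DataTemplate's Grid gets the full row width -->
                <Style TargetType="ListBoxItem">
                    <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
                </Style>
            </ListBox.ItemContainerStyle>
        </ListBox>

    With the container stretched, the DataTemplate itself no longer needs to wrap its content in a ListBoxItem of its own.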

  • relative url in wcf service binding

    - by Jeremy
    I have a Silverlight control which has a reference to a Silverlight-enabled WCF service. When I add a reference to the service in my Silverlight control, it adds the following to my clientconfig file:

        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="BasicHttpBinding_DataAccess" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="None" />
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://localhost:3097/MyApp/DataAccess.svc" binding="basicHttpBinding"
                        bindingConfiguration="BasicHttpBinding_DataAccess" contract="svcMyService.DataAccess"
                        name="BasicHttpBinding_DataAccess" />
            </client>
          </system.serviceModel>
        </configuration>

    How do I specify a relative URL in the endpoint address instead of the absolute URL? I want it to work no matter where I deploy the web app, without having to edit the clientconfig file, because the Silverlight component and the web app will always be deployed together. I thought I'd be able to specify just "DataAccess.svc", but it doesn't seem to like that.
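
    One workaround often used when the .xap and the service ship together is to build the endpoint address in code from wherever the .xap was downloaded, instead of reading it from the clientconfig. The sketch below assumes the generated proxy class is called DataAccessClient and that the service lives one level above ClientBin; adjust the relative path to your layout:

        // Sketch only: derive the service URL from the .xap's own origin so no absolute
        // address has to live in ServiceReferences.ClientConfig.
        Uri xapUri = Application.Current.Host.Source;            // e.g. http://host/MyApp/ClientBin/App.xap
        Uri serviceUri = new Uri(xapUri, "../DataAccess.svc");   // resolves to http://host/MyApp/DataAccess.svc
        var binding = new BasicHttpBinding { MaxReceivedMessageSize = 2147483647 };
        var client = new DataAccessClient(binding, new EndpointAddress(serviceUri));

    Because the address follows the host the .xap was served from, the same build works unchanged on localhost and on any deployment target.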

  • Oracle ODBC x64 - getting 0 when selecting a number(9) column

    - by MatsL
    I'm having a really weird problem with a third-party web service that uses an ODBC connection to Oracle 10.2.0.3.0. I've written a .NET client that generates the same SQL as the web service so I can find out what's going on. The web service is hosted by IIS 6 running in x64 mode, so we use the Oracle x64 client. The Oracle client version is 10.2.0.1.0. I have a table that looks like this (I've removed some columns and names):

        SQL> describe tablename;
        Name                                      Null?    Type
        ----------------------------------------- -------- ----------------------------
        KOD                                                VARCHAR2(30)
        ORDNING                                            NUMBER(5)
        AVGIFT                                             NUMBER(9)

    I then issue the following statement in SQL*Plus:

        SELECT KOD as kod, AVGIFT as riskPoang
        FROM tablename
        Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    And I get the following result:

        KOD                            RISKPOANG
        ------------------------------ ----------
        Hög risk                               55
        Mellan risk                            35
        Låg risk                               15
        Mycket låg risk                         5

    But when I execute the exact same SQL using the same DSN on the same machine, I get this:

        Values
        Kod: Hög risk          RiskPoäng: 0
        Kod: Mellan risk       RiskPoäng: 0
        Kod: Låg risk          RiskPoäng: 0
        Kod: Mycket låg risk   RiskPoäng: 0

    If I first cast the number to varchar and then back again to number, like this:

        SELECT KOD as kod, to_number(to_char(AVGIFT, '99'), '9999999999') as riskPoang
        FROM tablename
        Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    I get the correct result:

        Values
        Kod: Hög risk          RiskPoäng: 55
        Kod: Mellan risk       RiskPoäng: 35
        Kod: Låg risk          RiskPoäng: 15
        Kod: Mycket låg risk   RiskPoäng: 5

    Has anyone else experienced this? It's incredibly annoying; I'm completely stuck and not sure what to do next. We have a third-party web service that uses these tables, so I must get the original SQL statement to work since I can't modify its code. Any pointers are greatly appreciated! :-) Best regards, Mats

  • Visual Studio Unit Test failure to start

    - by swmi
    Hi, I am having an issue when starting tests in debug mode in Visual Studio 2008 Team Test, where it gives the following error: "Failed to queue test run '{user@machinename}': Object reference not set to an instance of an object." I googled for the error but no joy. I don't even understand what it means, as it is too brief. Has anyone come across this? Note that I can run tests fine if I am not debugging, and I get the same error irrespective of the test I run. Thank you, Swati

    ETA: Being new to Visual Studio Team Test, I didn't know there was a better exception log than what I was seeing. Anyhow, here it is:

        <Exception> System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.ShowToolWindow[T](T& toolWindow, String errorMessage, Boolean show)
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.OpenTestResultsToolWindow()
           at Microsoft.VisualStudio.TestTools.TestCaseManagement.SolutionIntegrationManager.DebugTarget(DebugInfo debugInfo, Boolean prepareEnvironment)
           at Microsoft.VisualStudio.TestTools.TestManagement.DebugProcessLauncher.Launch(String exeFileName, String args, String workingDir, EventHandler processExitedHandler, Process& process)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.StartProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.RestartProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.PrepareProcess(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.InitializeController(TestRun run)
           at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.QueueTestRunWorker(Object state)
        </Exception>

  • Why would ASP.NET MVC use session state?

    - by ray247
    Following the ASP.NET team's recommendation to use cache instead of session, we stopped using session state when working with the WebForms model over the last few years. So we normally have session state turned off in web.config:

        <sessionState mode="Off" />

    But now, when I'm testing an ASP.NET MVC application with this setting, it throws an error in the SessionStateTempDataProvider class inside the MVC framework, asking me to turn on session state. I did, and it worked. Looking at the source, it uses session:

        Dictionary<string, object> tempDataDictionary =
            httpContext.Session[TempDataSessionStateKey] as Dictionary<string, object>; // line 20 in SessionStateTempDataProvider.cs

    So, why would they use session here? What am I missing? Thanks, Ray.

    ========================================================

    Edit: Sorry, I didn't mean for this post to become a debate on session vs. cache; rather, in the context of ASP.NET MVC, I was just wondering why session is used here. In this Scott Watermasysk blog post he also mentions turning off session as a good practice, so I'm just wondering: do I have to turn it on to use MVC from here on?
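
    For reference, TempData's dependence on session state can be swapped out: a controller exposes a TempDataProvider property that accepts any ITempDataProvider. The sketch below is illustrative only; backing TempData with HttpContext.Cache keyed by the anonymous ID is an assumption I'm making for the example, not something the framework does:

        // Sketch of a TempData provider that avoids session state entirely.
        // Requires anonymousIdentification to be enabled so Request.AnonymousID is populated.
        public class CacheTempDataProvider : ITempDataProvider
        {
            private const string KeyPrefix = "__TempData_";

            public IDictionary<string, object> LoadTempData(ControllerContext controllerContext)
            {
                string key = KeyPrefix + controllerContext.HttpContext.Request.AnonymousID;
                var data = controllerContext.HttpContext.Cache[key] as Dictionary<string, object>;
                controllerContext.HttpContext.Cache.Remove(key); // TempData is meant to survive one redirect only
                return data ?? new Dictionary<string, object>();
            }

            public void SaveTempData(ControllerContext controllerContext, IDictionary<string, object> values)
            {
                string key = KeyPrefix + controllerContext.HttpContext.Request.AnonymousID;
                controllerContext.HttpContext.Cache[key] = new Dictionary<string, object>(values);
            }
        }

        // Opt in from a controller (or a common base controller):
        public class HomeController : Controller
        {
            public HomeController()
            {
                TempDataProvider = new CacheTempDataProvider();
            }
        }

    With a provider like this in place, sessionState can stay off and only controllers that never use TempData/redirect round-trips need no provider at all.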

  • Binding a Value from a View-Model to the View-Model of a child User Control in Silverlight?

    - by andrej351
    Hi there, so I have a UserControl for one of my Views and another 'child' UserControl inside that. The outer 'parent' UserControl has a Collection on its View-Model and a Grid control on it to display a list of Items. I want to place another UserControl inside this UserControl to display a form representing the details of one Item. The outer/parent UserControl's View-Model already has a property on it to hold the currently selected Item, and I would like to bind this to a DependencyProperty on the inner/child UserControl. I would then like to bind that DependencyProperty to a property on the child UserControl's View-Model. I can then set the DependencyProperty once in XAML with a binding expression and have the child UserControl do all its work in its View-Model like it should. The code I have looks like this:

    Parent UserControl:

        <UserControl x:Class="ItemsListView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel}">
            <!-- Grid Control here... -->
            <ItemDetailsView Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemsListViewModel.SelectedItem}" />
        </UserControl>

    Child UserControl:

        <UserControl x:Class="ItemDetailsView"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     DataContext="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel}"
                     ItemDetailsView.Item="{Binding Source={StaticResource ServiceLocator}, Path=ItemDetailsViewModel.Item, Mode=TwoWay}">
            <!-- Form controls here... -->
        </UserControl>

    The selected Item is bound to the DependencyProperty fine; however, the binding from the DependencyProperty to the child View-Model does not work. I've used this sort of approach in a WPF app without problems. It appears to be a situation where there are two concurrent bindings which need to work, but with the same target for two sources. Why won't the second binding (in the child UserControl) work? Is there a way to achieve the behaviour I'm after? Cheers.
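
    One workaround that is often suggested for Silverlight in this situation is to drop the second binding and instead push the value into the child view-model from a property-changed callback on the dependency property. The sketch below assumes the child view-model type is ItemDetailsViewModel with an Item property; the Item type is left as object for illustration:

        // Sketch: in ItemDetailsView.xaml.cs, forward changes of the Item
        // dependency property to the child view-model by hand.
        public partial class ItemDetailsView : UserControl
        {
            public static readonly DependencyProperty ItemProperty =
                DependencyProperty.Register(
                    "Item",
                    typeof(object),                  // assumed; use your concrete Item class
                    typeof(ItemDetailsView),
                    new PropertyMetadata(null, OnItemChanged));

            public object Item
            {
                get { return GetValue(ItemProperty); }
                set { SetValue(ItemProperty, value); }
            }

            private static void OnItemChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var view = (ItemDetailsView)d;
                var viewModel = view.DataContext as ItemDetailsViewModel; // assumed view-model type
                if (viewModel != null)
                {
                    viewModel.Item = e.NewValue;                          // assumed property on the view-model
                }
            }
        }

    The parent's binding into Item keeps working as before; the callback simply replaces the second binding that Silverlight is refusing to honour.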

  • Class unloading in java

    - by java_geek
    When a classloader is garbage collected, are the classes loaded by it unloaded? When the JVM is run in verbose mode, all the loaded classes are output. Similarly, will the JVM log when it unloads a class? I wrote a custom class loader to test this, but could not see any verbose log for unloading of the classes.

        CustomClassLoader loader = new CustomClassLoader(new URL[]{}, CustomClassLoader.class.getClassLoader());
        loader.addURL("D:\\workspace\\ClassLoaderTest\\implementation.jar");
        Class c = null;
        try {
            c = Class.forName("Horse", false, loader);
            if (c != null) {
                try {
                    Animal animal = (Animal) c.newInstance();
                    animal.eat();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        loader = null;
        byte[] b = new byte[58 * 1024 * 1024];
        System.gc();
        ClassLoadingMXBean clBean = ManagementFactory.getClassLoadingMXBean();
        System.out.println("Number of classes currently loaded " + clBean.getLoadedClassCount());
        System.out.println("Number of classes loaded totally " + clBean.getTotalLoadedClassCount());
        System.out.println("Number of classes unloaded " + clBean.getUnloadedClassCount());

    Even the ClassLoadingMXBean gives the number of unloaded classes as 0. How can I know that a class is unloaded when the class loader is GCed?

  • Client calls .asmx, Server exposes WCF endpoint

    - by Lucas B
    We have a client that has been configured to connect to an asmx service. We don't want to ask our customers to update their configuration, but we would like to upgrade our service to use WCF. Does anyone know if WCF supports this? If so, what would the configuration file look like? Our asmx service looks like this:

        <bindings>
          <basicHttpBinding>
            <binding name="ATransactionSoap" closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                     bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                     maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                     messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                     useDefaultWebProxy="true">
              <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
              <security mode="None">
                <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                <message clientCredentialType="UserName" algorithmSuite="Default" />
              </security>
            </binding>
          </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="http://.../atransaction.asmx" binding="basicHttpBinding"
                    bindingConfiguration="ATransactionSoap" contract="ATransactionSoap"
                    name="ATransactionSoap" />
        </client>
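
    For what it's worth, a WCF service hosted over basicHttpBinding can usually answer an old .asmx client as long as the contract reproduces the same SOAP on the wire. A hedged sketch follows; the namespace, operation name, and SOAP action are placeholders and would have to match whatever the original atransaction.asmx WSDL exposed:

        // Sketch: a WCF contract dressed up to look like the legacy ATransactionSoap service.
        // Placeholders: namespace, operation, and action must mirror the original asmx WSDL.
        [ServiceContract(Namespace = "http://tempuri.org/", Name = "ATransactionSoap")]
        [XmlSerializerFormat]   // asmx used XmlSerializer, not the DataContractSerializer
        public interface IATransactionSoap
        {
            [OperationContract(Action = "http://tempuri.org/SubmitTransaction")]
            string SubmitTransaction(string transactionXml);
        }

    The service would then be exposed on a basicHttpBinding endpoint whose address matches the URL the clients already have configured (for example by mapping the old .asmx path to the new .svc endpoint), so no client-side change is needed.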

  • Referencing not-yet-defined variables - Java

    - by user2537337
    Because I'm tired of solving math problems, I decided to try something more engaging with my very rusty (and even without the rust, very basic) Java skills. I landed on a super-simple people simulator, and thus far have been having a grand time working through the various steps of getting it to function. Currently, it generates an array of people-class objects and runs a for loop to cycle through a set of actions that alter the relationships between them, which I have stored in a 2d integer array. When it ends, I go look at how much they all hate each other. Fun stuff. Trouble has arisen, however, because I would like the program to clearly print what action is happening when it happens. I thought the best way to do this would be to add a string, description, to my "action" class (which stores variables for the actor, reactor, and the amount the relationship changes). This works to a degree, in that I can print a generic message ("A fight has occurred!") with no problem. However, ideally I would like it to be a little more specific ("Person A has thrown a rock at Person B's head!"). This latter goal is proving more difficult: attempting to construct an action with a description string that references actor and reactor gets me a big old error, "Cannot reference field before it is defined." Which makes perfect sense. I believe I'm not quite in programmer mode, because the only other way I can think to do this is an unwieldy switch statement that negates the need for each action to have its own nicely-packaged description. And there must be a neater way. I am not looking for examples of code, only a push in the direction of the right concept to handle this.

  • JDBC call not executing

    - by dbyrne
    I am working on one of the DAOs for a medium-sized web application. Unfortunately, it contains very convoluted logic and makes hundreds of JDBC stored proc calls in loops. This is out of my control. I am working on a method inside the DAO which makes a single JDBC call. The simplified version of what this method looks like is this:

        DriverManager.registerDriver(new com.sybase.jdbc2.jdbc.SybDriver());
        Connection con = DriverManager.getConnection((String) connectionDetails.get("DATABASE_URL"),
                                                     (String) connectionDetails.get("USERID"),
                                                     (String) connectionDetails.get("PASSWORD"));
        String sqlToExecute = "{call " + STORED_PROC + "(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}";
        CallableStatement stmt = con.prepareCall(sqlToExecute);
        // Maybe I should try calling clearParameters here?
        stmt.setString(1, someData);
        // ....Set of parameters....
        if (!stmt.execute()) {
            // execute method never returns false
        }
        stmt.close();

    It's pretty much a textbook JDBC call. All this stored proc does is insert a single row. Here is where things get crazy: this code works when you run it through a debugger line by line, but fails when you run it "full speed". Not only does it fail, but it doesn't throw any exception! The execute method always returns true. It just breezes right through the JDBC call without inserting a row into the database. If you go through the log files, copy the stored proc call and run it manually, it works (just like it does in debug mode). What's strange is that the rest of the DAO, with all its hundreds of looped stored proc calls, works fine. My thinking is that Connection or CallableStatement is caching some value behind the scenes that is screwing things up. Has anyone ever seen anything like this before? A JDBC call failing with no exceptions? I know it will be impossible to provide a complete solution to this without seeing the whole application; I am just looking for suggestions on possible issues to investigate.

  • WPF: How to bind and update display with DataContext

    - by Am
    I'm trying to do the following: I have a TabControl with several tabs. Each TabControlItem.Content points to BookDetails, which is a UserControl. Each BookDetails has a dependency property called IsEditMode. I want a control outside of the TabControl, named ToggleEditButton, to be updated whenever the selected tab changes. I thought I could do this by changing the ToggleEditButton data context, but it doesn't seem to work (I'm new to WPF, so I might be way off). The code changing the data context:

        private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            if (e.Source is TabControl)
            {
                if (e.Source.Equals(tabControl1))
                {
                    if (tabControl1.SelectedItem is CloseableTabItem)
                    {
                        var tabItem = tabControl1.SelectedItem as CloseableTabItem;
                        RibbonBook.DataContext = tabItem.Content as BookDetails;
                        ribbonBar.SelectedTabItem = RibbonBook;
                    }
                }
            }
        }

    The DependencyProperty under BookDetails:

        public static readonly DependencyProperty IsEditModeProperty =
            DependencyProperty.Register("IsEditMode", typeof(bool), typeof(BookDetails),
                new PropertyMetadata(true));

        public bool IsEditMode
        {
            get { return (bool)GetValue(IsEditModeProperty); }
            set
            {
                SetValue(IsEditModeProperty, value);
                SetValue(IsViewModeProperty, !value);
            }
        }

    And the relevant XAML:

        <odc:RibbonTabItem Title="Book" Name="RibbonBook">
          <odc:RibbonGroup Title="Details" Image="img/books2.png" IsDialogLauncherVisible="False">
            <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
                                    odc:RibbonBar.MinSize="Medium"
                                    SmallImage="img/edit_16x16.png" LargeImage="img/edit_32x32.png"
                                    Click="Book_EditDetails"
                                    IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}"/>
        ...

    There are two things I want to accomplish: have the button reflect IsEditMode for the visible tab, and have the button change the property value with no code-behind (if possible). Any help would be greatly appreciated.
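
    A sketch of one no-code-behind variant, assuming the button and tabControl1 share a name scope and that each tab's Content really is a BookDetails: bind the toggle button straight at the TabControl's selected item instead of swapping the DataContext in the selection-changed handler:

        <!-- Sketch only: follows the selected tab's BookDetails.IsEditMode both ways -->
        <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
                                IsChecked="{Binding ElementName=tabControl1,
                                                    Path=SelectedItem.Content.IsEditMode,
                                                    Mode=TwoWay}"/>

    Because the binding source is the TabControl itself, every tab change re-evaluates the path automatically and the SelectionChanged handler above becomes unnecessary for this particular job.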

  • movie does not start in full screen in flash video player

    - by jodeci
    We have this legacy code for a Flash video player that functions well enough but still has some loose ends I need to tighten up. It can do the basic "switch to full screen and back to normal size" stunts, with one exception: on the first fresh load of the app, if I switch to full-screen mode first and then click to play the movie, the player is in full screen, yet the movie itself remains at its original size.

        // trigger
        if (stage.displayState == StageDisplayState.NORMAL) {
            stage.addEventListener('fullScreen', procFullScreen);
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.displayState = StageDisplayState.FULL_SCREEN;
            // mv:VideoDisplay
            mv.percentHeight = 100;
            mv.percentWidth = 100;
            mv.x = 0;
            mv.y = 0;
        }

        // event handler
        if (event.fullScreen) {
            mv.smoothing = true;
            this.height = stage.height;
            this.width = stage.width;
            // videoCanvas:Canvas
            videoCanvas.height = Application.application.height;
            videoCanvas.width = Application.application.width;
            fullScreenViewStack.selectedIndex = 1;
        }

    The VideoDisplay object even returns the expected width/height, but the movie just plays at its original size. If I switch screen sizes during movie playback, then the movie shrinks or stretches as it should. I'm running out of ideas; any suggestions? Thanks in advance!

  • iPhone - adding views and landscape orientation

    - by Franz
    I've got a problem with Cocoa Touch and landscape orientation. I instantiate my view from a .xib; I added the following to my view controller to only allow landscape orientation:

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return interfaceOrientation == UIInterfaceOrientationLandscapeRight;
        }

    This works fine for the first view I add. If I add a second view, however, it is rotated again, as though the landscape view were being shown in portrait mode (rotated 90 degrees counterclockwise). I really don't know what is going on and can't find a workaround. I even tried to get behind what is happening by just adding my view twice:

        MainMenuViewController* controller = [[MainMenuViewController alloc] initWithNibName:@"MainMenu" bundle:nil];
        [window addSubview: controller.view];

        MainMenuViewController* controller2 = [[MainMenuViewController alloc] initWithNibName:@"MainMenu" bundle:nil];
        [window addSubview: controller2.view];

        [window makeKeyAndVisible];

    The view of controller is displayed correctly, while the view of controller2 is rotated by 90 degrees. Does anyone have an idea how this can happen? Thanks for your help.

  • UnicodeEncodeError: 'ascii' codec can't encode character [...]

    - by user1461135
    I have read the HOWTO on Unicode from the official docs and a full, very detailed article as well. Still I don't get why it throws me this error. Here is what I attempt: I open an XML file that contains chars outside the ASCII range (but inside the allowed XML range). I do that with

        cfg = codecs.open(filename, encoding='utf-8', mode='r')

    which runs fine. Looking at the string with repr() also shows me a unicode string. Now I go ahead and read that with

        parseString(cfg.read().encode('utf-8'))

    Of course, my XML file starts with this: <?xml version="1.0" encoding="utf-8"?>. Although I suppose it is not relevant, I also declared utf-8 for my Python script, but since I am not writing unicode characters directly in it, this should not apply here. The same goes for the following line, which is also right at the beginning:

        from __future__ import unicode_literals

    Next I pass the generated object to my own class, where I read tags into variables like this:

        xmldata.getElementsByTagName(tagName)[0].firstChild.data

    and assign each one to a variable in my class. Now what works perfectly are these commands (obj is an instance of the class):

        for element in obj:
            print element

    And this command works as well:

        print obj.__repr__()

    I defined __iter__() to just yield every variable, while __repr__() uses the typical printf stuff: "%s" % self.varname. Both commands print perfectly and can output the unicode character. What does not work is this:

        print obj

    And now I am stuck, because this throws the dreaded UnicodeEncodeError: 'ascii' codec can't encode character u'\xfc' in position 47. So what am I missing? What am I doing wrong? I am looking for a general solution; I always want to handle strings as unicode, just to avoid any possible errors and write a compatible program.

    Edit: I also defined this:

        def __str__(self):
            return self.__repr__()

        def __unicode__(self):
            return self.__repr__()

    From documentation I got that this

  • Trying to create a .NET DLL to be used with Non-.NET Application

    - by Changeling
    I am trying to create a .NET DLL so I can use the cryptographic functions with my non-.NET application. I have created a class library so far with this code:

        namespace AESEncryption
        {
            public class EncryptDecrypt
            {
                private static readonly byte[] optionalEntropy = { 0x21, 0x05, 0x07, 0x08, 0x27, 0x02, 0x23, 0x36, 0x45, 0x50 };

                public interface IEncrypt
                {
                    string Encrypt(string data, string filePath);
                };

                public class EncryptDecryptInt : IEncrypt
                {
                    public string Encrypt(string data, string filePath)
                    {
                        byte[] plainKey;
                        try
                        {
                            // Read in the secret key from our cipher key store
                            byte[] cipher = File.ReadAllBytes(filePath);
                            plainKey = ProtectedData.Unprotect(cipher, optionalEntropy, DataProtectionScope.CurrentUser);

                            // Convert our plaintext data into a byte array
                            byte[] plainTextBytes = Encoding.ASCII.GetBytes(data);
                            MemoryStream ms = new MemoryStream();
                            Rijndael alg = Rijndael.Create();
                            alg.Mode = CipherMode.CBC;
                            alg.Key = plainKey;
                            alg.IV = optionalEntropy;
                            CryptoStream cs = new CryptoStream(ms, alg.CreateEncryptor(), CryptoStreamMode.Write);
                            cs.Write(plainTextBytes, 0, plainTextBytes.Length);
                            cs.Close();
                            byte[] encryptedData = ms.ToArray();
                            return Convert.ToString(encryptedData);
                        }
                        catch (Exception ex)
                        {
                            return ex.Message;
                        }
                    }
                }
            }
        }

    In my VC++ application, I am using the #import directive to import the TLB file created from the DLL, but the only available functions are _AESEncryption and LIB_AES etc. I don't see the interface or the Encrypt function. When I try to instantiate the object so I can call the functions in my VC++ program, I use this code and get the following errors:

        HRESULT hr = CoInitialize(NULL);
        IEncryptPtr pIEncrypt(__uuidof(EncryptDecryptInt));

        error C2065: 'IEncryptPtr': undeclared identifier
        error C2146: syntax error : missing ';' before identifier 'pIEncrypt'
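
    A hedged sketch of what the COM-interop plumbing usually looks like when a .NET DLL is consumed from VC++ via a type library (the GUIDs below are placeholders; generate your own, and note that COM-visible types are typically declared at namespace level rather than nested inside another class, which may be part of why the interface is missing from the TLB):

        // Sketch: marking the interface and class COM-visible so the #import'ed TLB
        // exposes IEncrypt and a creatable coclass. GUIDs are placeholders.
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")]
        [InterfaceType(ComInterfaceType.InterfaceIsDual)]
        public interface IEncrypt
        {
            string Encrypt(string data, string filePath);
        }

        [ComVisible(true)]
        [Guid("66666666-7777-8888-9999-000000000000")]
        [ClassInterface(ClassInterfaceType.None)]
        public class EncryptDecryptInt : IEncrypt
        {
            public string Encrypt(string data, string filePath)
            {
                // ... encryption logic as in the original class ...
                return string.Empty;
            }
        }

        // Registration (assumed build setup), typically from a VS command prompt:
        //   regasm AESEncryption.dll /tlb:AESEncryption.tlb /codebase

    With the interface exported this way, the #import directive generates an IEncryptPtr smart pointer, which is what the VC++ snippet above is expecting to find.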

  • ASP.NET page with base class with dynamic master page not firing events

    - by Kangkan
    Hi guys! I have a feeling that I have done something terribly wrong somewhere. I was working on a small ASP.NET app. I have some dynamic themes in the \theme folder and have implemented a page base class to load the master page on the fly. The master has a ContentPlaceHolder like:

        <asp:ContentPlaceHolder ID="cphBody" runat="server" />

    Now I am adding pages that are derived from my base class and adding the form elements. I know Visual Studio has problems showing the page in design mode. I have a dropdown box and wish to add an onselectedindexchange event, but it is not working. The page is like this:

        <%@ Page Language="C#" AutoEventWireup="true" Inherits="trigon.web.Pages.MIS.JobStatus"
                 Title="Job Status" AspCompat="true" CodeBehind="JobStatus.aspx.cs" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="cphBody" runat="Server">
            <div id="divError" runat="server" />
            <asp:DropDownList runat="server" id="jobType"
                              onselectedindexchange="On_jobTypeSelection_Change"></asp:DropDownList>
        </asp:Content>

    I have also tried adding the event in the code-behind like this:

        protected void Page_Load(object sender, EventArgs e)
        {
            jobType.SelectedIndexChanged += new System.EventHandler(this.On_jobTypeSelection_Change);
            if (!IsPostBack)
            {
                JobStatus_DA da = new JobStatus_DA();
                jobType.DataSource = da.getJobTypes();
                jobType.DataBind();
            }
        }

        protected void On_jobTypeSelection_Change(Object sender, EventArgs e)
        {
            // do something here
        }

    Can anybody help? Regards,
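
    As an aside that may or may not be the whole story here: the markup attribute ASP.NET wires up is OnSelectedIndexChanged, and a DropDownList only posts back when its selection changes if AutoPostBack is enabled. A minimal sketch of the code-behind variant with both in place, reusing the control and handler names from the question:

        // Sketch: wire the handler early and enable AutoPostBack so a selection
        // change actually causes the postback that raises the event.
        protected void Page_Init(object sender, EventArgs e)
        {
            jobType.AutoPostBack = true;
            jobType.SelectedIndexChanged += On_jobTypeSelection_Change;
        }

    The equivalent in markup would be AutoPostBack="true" together with OnSelectedIndexChanged="On_jobTypeSelection_Change" on the DropDownList.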

  • "One or more breakpoints cannot be set and have been disabled. Execution will stop at the beginning

    - by sam
    I set a breakpoint in my code in Visual C++, but when I run, I see the error mentioned in the title. I know this question has been asked before on Stack Overflow (http://stackoverflow.com/questions/657470/breakpoints-cannot-be-set-and-have-been-disabled-problem), but none of the answers there fully explained the problem I'm seeing. The closest I can see is something about the linker, but I don't understand that, so if someone could explain in more detail that would be great. In my case, I have two projects in Visual C++: the production dsw and the test code dsw. I have loaded and rebuilt both dsws in debug mode. I want a breakpoint in the production code, which is run via the test scripts. My issue is that I get the error message when I run the test code, because the breakpoint is in the production code, which isn't loaded when the test starts. Near the beginning of the test script there is a mytest_initialize() command. I imagine this goes off and loads the production dll. Once this line has executed, I can put the breakpoint in my production code and run until I hit it. But it's quite annoying to have to run to this line, set the breakpoint and continue every time I want to run the test. So I think the problem is that Visual C++ doesn't realise the two projects are related. Is this a linker issue? What does the linker do, and what settings should I change to make this work? Thanks in advance. Apologies if I should instead be appending this question to the existing one; this is my first post, so I'm not quite sure how this should work.

  • SQLDeveloper using over 100MB of PGA+UGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

        SELECT e.SID, e.username, e.status, b.PGA_MEMORY
        FROM v$session e
        LEFT JOIN (select y.SID, y.value pga,
                          TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
                   from v$sesstat y, v$statname z
                   where y.STATISTIC# = z.STATISTIC#
                   and NAME = 'session pga memory') b
          ON e.sid = b.sid
        WHERE (PGA)/1024/1024 > 20
        ORDER BY 4 DESC;

    It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?

    Update: I discovered that my numbers were off because I had not understood that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem remains that SQL Developer seems to use excessive PGA.

  • Using OpenGL vertex buffers in C++.

    - by Ren
    I've loaded a Wavefront .obj file and drawn it in immediate mode, and it works fine. I'm now trying to draw the same model with a vertex buffer, but I have a question. My model data is organized in the following structures:

        struct Vert {
            double x;
            double y;
            double z;
        };

        struct Norm {
            double x;
            double y;
            double z;
        };

        struct Texcoord {
            double u;
            double v;
            double w;
        };

        struct Face {
            unsigned int v[3];
            unsigned int n[3];
            unsigned int t[3];
        };

        struct Model {
            unsigned int vertNumber;
            unsigned int normNumber;
            unsigned int texcoordNumber;
            unsigned int faceNumber;
            Vert * vertArray;
            Norm * normArray;
            Texcoord * texcoordArray;
            Face * faceArray;
        };

    As it is now, I don't think there is any redundant data, since multiple Face structures can point to the same vertex, normal, or texture coordinate. When I make VBOs for the vertex positions, normals, and texture coordinates, and assign data to them with glBufferData, do I have to build arrays with redundant data so that they will all have the same number of elements in the same order? I'd like to know if there is a simpler way to fill the buffers with the way I already have the model's data organized.

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe even the error can be echoed (depending on whether you can tell if they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about this part. My question is: to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on the part of administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course, if there's something that's unique to the runtime context, this should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.

  • Different users get the same cookie - value in .ASPXANONYMOUS

    - by Malcolm Frexner
    My site allows anonymous users. I saw that under heavy load, users sometimes get profile values from other users. This happens for anonymous users. I logged the access to profile data:

        /// <summary>
        ///
        /// </summary>
        /// <param name="controller"></param>
        /// <returns></returns>
        public static string ProfileID(this Controller controller)
        {
            if (ApplicationConfiguration.LogProfileAccess)
            {
                StringBuilder sb = new StringBuilder();
                (from header in controller.Request.Headers.ToPairs()
                 select string.Concat(header.Key, ":", header.Value, ";")).ToList().ForEach(x => sb.Append(x));
                string log = string.Format("ip:{0} url:{1} IsAuthenticated:{2} Name:{3} AnonId:{4} header:{5}",
                    controller.Request.UserHostAddress,
                    controller.Request.Url.ToString(),
                    controller.Request.IsAuthenticated,
                    controller.User.Identity.Name,
                    controller.Request.AnonymousID,
                    sb);
                _log.Debug(log);
            }
            return controller.Request.IsAuthenticated ? controller.User.Identity.Name : controller.Request.AnonymousID;
        }

    I can see in the log that users really do get the same cookie value for .ASPXANONYMOUS, even if they have different IPs. Just to be safe, I removed dependency injection for the FormsAuthentication. I don't use OutputCaching. My web.config has this setting for authentication:

        <anonymousIdentification enabled="true" cookieless="UseCookies" cookieName=".ASPXANONYMOUS"
                                 cookieTimeout="30" cookiePath="/" cookieRequireSSL="false"
                                 cookieSlidingExpiration="true" />
        <authentication mode="Forms">
          <forms loginUrl="~/de/Account/Login" />
        </authentication>

    Does anybody have an idea what else I could log or what I should have a look at?

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a GridView inside an UpdatePanel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large db reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled why it would not perform the same as it does from Visual Studio. (It is in release mode and I have taken off the debug flag.) I have since been trying things like eliminating unneeded update panels and throwing out the Ajax tool. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. I am wondering: where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

  • Low Level Console Input

    - by Soulseekah
    I'm trying to send commands to the input of a cmd.exe application using the low-level read/write console functions. I have no trouble reading the text (scraping) using the ReadConsole...() and WriteConsole() functions after having attached to the process console, but I've not figured out how to write, for example, "dir" and have the console interpret it as a sent command. Here's a bit of my code:

        CreateProcess(NULL, "cmd.exe", NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
        AttachConsole(pi.dwProcessId);

        strcpy(buffer, "dir");
        WriteConsole(GetStdHandle(STD_INPUT_HANDLE), buffer, strlen(buffer), &charRead, NULL);

    The STARTUPINFO attributes of the process are all set to zero, except, of course, the .cb attribute. Nothing changes on the screen; however, I'm getting an Error 6: Invalid Handle returned from WriteConsole on STD_INPUT_HANDLE. If I write to STD_OUTPUT_HANDLE, I do get my "dir" written on the screen, but of course nothing happens. I'm guessing SetConsoleMode() might be of help, but I've tried many mode combinations and nothing helped. I've also created a quick console application that waits for input (scanf()) and echoes back whatever goes in; that didn't work either. I've also tried typing into the scanf() prompt and then peeking into the input buffer using PeekConsoleInput(); it returns 0, but the INPUT_RECORD array is empty. I'm aware that there is another way around this, using WriteConsoleInput() to directly inject INPUT_RECORD structured events into the console, but this would be way too long; I'd have to send each keypress into it. I hope the question is clear. Please let me know if you need any further information. Thanks for your help.
