Search Results

Search found 6561 results on 263 pages for 'developing'.


  • VB.NET Program Locks Up with Internet Explorer Opened

    - by aaronsj
    I'm using Visual Studio 2008 and developing a VB.NET application. I'm having strange lockup problems with my program, but only when Internet Explorer 8 is open. When I cover my form with another window and then uncover it, I find that it has locked up. My program has no references to IE, and the only thing it even has to do with IE is using Process.Start with a web address. My program works fine and exactly as it should, but only when IE is not open. Does anyone know why a program would lock up only while IE is running?

    Edit: I've done some digging and I've found the offending thread in my program. I don't know what starts this thread or what it does, but when I kill it, my program no longer freezes. The thread is one of the CreateApplicationContext threads; here are the last few items in its stack trace:

        6  ntkrnlpa.exe+0x897bc
        7  ntdll.dll!KiFastSystemCallRet
        8  mscorwrks.dll!LogHelp_TerminateOnAssert+0x61
        9  mscorwrks.dll!DllUnregisterServerInternal+0x10523
        10 mscorwrks.dll!DllUnregisterServerInternal+0x10542
        11 mscorwrks.dll!StrongNameErrorInfo+0x34387
        12 mscorwrks.dll!StrongNameErrorInfo+0x34815
        13 mscorwrks.dll!CreateApplicationContext+0xbc35
        14 KERNEL32.dll!GetModuleHandleA+0xdf

    Process Explorer says my program is using no CPU and is not throwing any exceptions while it is hung.
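
    A minimal C# sketch (not from the post; the helper name, URL handling and logging are placeholders) of one way to keep the shell-execute call off the UI thread while investigating the freeze:

        // Hypothetical helper; only Process.Start/ProcessStartInfo are standard .NET APIs.
        using System;
        using System.Diagnostics;
        using System.Threading;

        static void OpenUrlInBackground(string url)
        {
            // Queue the shell-execute call on a worker thread so the UI
            // message pump cannot be blocked by whatever handles the URL.
            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    ProcessStartInfo psi = new ProcessStartInfo(url);
                    psi.UseShellExecute = true;   // shell execute is what resolves a URL to the default browser
                    Process.Start(psi);
                }
                catch (Exception ex)
                {
                    Debug.WriteLine("Failed to open URL: " + ex.Message);
                }
            });
        }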

    Read the article

  • Performance issue between builds

    - by DeadMG
    I've been developing a small indie game in my spare time and have run across an inexplicable issue. Some builds of the game will randomly run several hundred frames per second slower than other builds. For example, when rendering some text and no 3D scene, I can achieve 1800 FPS on my own hardware. Add one 3D sphere (10k verts, pixel shaded): 1700 FPS. Add two more spheres: 800 FPS. Remove all spheres: 1100 FPS, even though the code now renders the same scene I previously got 1800 FPS on, which is just the FPS counter. I've tried rebuilding and cleaning the project and rebooting the compiler. This is in Release mode and I turned on all the optimizations I could find. Any suggestions as to the cause?

    I ran a quick profile, and Visual Studio seems to think that over 90% of my time was spent in D3D9_43.dll, suggesting that it's not a bug in my app, which doesn't explain why it manifests in only some builds.

    I rebooted my machine and it's back up to 1800 FPS. I think it's a bug in the DirectX SDK tools (amongst many others). Going to delete this question.

    Read the article

  • Mapping a URL to a service inside a class library

    - by johnk82swe
    I'm developing a small content management solution that can be used in any ASP.NET 3.5 website. The website references some DLLs and then lets its aspx pages inherit my page base class. Some configuration in web.config is also needed, but that's it. Now I'm building a standalone Silverlight editor for the CMS. My idea is that it should communicate with the server using web services. But the question is how to make this service available to the editor? I don't want the website developers to have to bother with it. If I used a REST API rather than SOAP, I could just create an HttpHandler in my class library and let the website developers map it in web.config to the path "editor"; the editor could then communicate with that handler at mywebsite.com/editor. Is there any way to achieve the same with an asmx or WCF service? The important thing is that the website developers never have to set up any asmx files or anything. They should only have to specify a URL and map that URL to a service inside my class library. Thanks in advance!
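
    A minimal sketch of the HttpHandler route described above, with hypothetical type, namespace and path names (whether the same single-entry setup is possible for an asmx/WCF service is exactly the open question):

        // Handler living inside the CMS class library; the consuming site never adds an .ashx/.asmx file.
        using System.Web;

        public class EditorServiceHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Dispatch the Silverlight editor's requests here (load/save content, etc.).
                context.Response.ContentType = "text/plain";
                context.Response.Write("editor endpoint");
            }
        }

        // The website developer's only job is the web.config mapping, roughly:
        //   <httpHandlers>
        //     <add verb="*" path="editor" type="MyCms.EditorServiceHandler, MyCms" />
        //   </httpHandlers>
        // (On IIS 7 integrated mode the equivalent entry goes under <system.webServer><handlers>.)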

    Read the article

  • Coding style in .NET: whether to refactor into new method or not?

    - by Dione
    Hi. As you are aware, in .NET code-behind style we already use a lot of methods to accommodate the _Click handlers, _SelectedIndexChanged handlers, etc. In our team there are some developers who add methods in the middle of the code-behind, for example:

        public void Button_Click(object sender, EventArgs e)
        {
            // some logic here..
            // some logic there..
            DoSomething();
            DoSomethingThere();
            // another logic here..
            DoOtherSomething();
        }

        private void DoSomething() { }
        private void DoSomethingThere() { }
        private void DoOtherSomething() { }

        public void DropDown_SelectedIndexChanged() { }
        public void OtherButton_Click() { }

    and each of those methods is used only once in that handler, not anywhere else on the page, and not called from any other part of the solution. They say it makes the code tidier by grouping the logic and extracting it into additional sub-methods. I can understand it if the sub-method is used over and over again in the code, but if it is only used once I think it is not really a good idea to extract it: as the code gets bigger and bigger, when you look at the page and try to understand the logic, or debug by reading through line by line, you get confused jumping from the main method to a sub-method, back to the main method, and to a sub-method again. I know this kind of grouping is better when writing old ASP or ColdFusion, but I am not sure whether this style is better for .NET or not. The question is: when developing in .NET, is grouping related logic into a sub-method better (even though it is only used once), or is it better to keep it inside the main method and add an // explanation here comment at the start of each block? Hope my question is clear enough. Thanks.

    Read the article

  • ASP.NET javascript embed in template column

    - by Mahesh
    Hi, I am developing a web page in which a RadGrid displays a list of exams. I included a template column which shows a countdown timer until the exam expires. The code is as given below:

        <telerik:RadGrid ID="radGrid" runat="server" AutoGenerateColumns="false">
          <MasterTableView>
            <Columns>
              <telerik:GridTemplateColumn HeaderText="template" DataField="Date">
                <ItemTemplate>
                  <script language="JavaScript" type="text/javascript">
                    TargetDate = '<%# Eval("Date") %>';
                    BackColor = "white";
                    ForeColor = "black";
                    CountActive = true;
                    CountStepper = -1;
                    LeadingZero = true;
                    DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds.";
                    FinishMessage = "It is finally here!";
                  </script>
                  <script language="JavaScript" src="http://scripts.hashemian.com/js/countdown.js" type="text/javascript"></script>
                </ItemTemplate>
              </telerik:GridTemplateColumn>
            </Columns>
          </MasterTableView>
        </telerik:RadGrid>

    I am giving a DataTable as the datasource to this grid. But my problem is that the template column shows data only for the first record, and the value shown is taken from the last row in the DataTable. For example, if I give the data below, I can see 3 records, but only the first record displays the counter, and it shows the last value (10/10/2010 05:43 PM):

        02/02/2011 01:00 AM
        08/09/2010 11:00 PM
        10/10/2010 05:43 PM

    Could you please help with this? Thanks, Mahesh

    Read the article

  • Make an abstract class or use a processor?

    - by Tim Murphy
    I'm developing a class to compare two directories and run an action when a directory/file in the source directory does not exist in the destination directory. Here is a prototype of the class:

        public abstract class IdenticalDirectories
        {
            private DirectoryInfo _sourceDirectory;
            private DirectoryInfo _destinationDirectory;

            protected abstract void DirectoryExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory);
            protected abstract void DirectoryDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory);
            protected abstract void FileExists(FileInfo sourceFile, FileInfo destinationFile);
            protected abstract void FileDoesNotExist(FileInfo sourceFile, FileInfo destinationFile);

            public IdenticalDirectories(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory) { ... }

            public void Run()
            {
                foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories())
                {
                    DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(sourceSubDirectory);

                    if (destinationSubDirectory.Exists)
                    {
                        this.DirectoryExists(sourceSubDirectory, destinationSubDirectory);
                    }
                    else
                    {
                        this.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory);
                    }

                    foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles())
                    {
                        FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile);

                        if (destinationFile.Exists)
                        {
                            this.FileExists(sourceFile, destinationFile);
                        }
                        else
                        {
                            this.FileDoesNotExist(sourceFile, destinationFile);
                        }
                    }
                }
            }
        }

    The above prototype is an abstract class. I'm wondering if it would be better to make the class non-abstract and have the Run method receive a processor? E.g.:

        public void Run(IIdenticalDirectoriesProcessor processor)
        {
            foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories())
            {
                DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(sourceSubDirectory);

                if (destinationSubDirectory.Exists)
                {
                    processor.DirectoryExists(sourceSubDirectory, destinationSubDirectory);
                }
                else
                {
                    processor.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory);
                }

                foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles())
                {
                    FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile);

                    if (destinationFile.Exists)
                    {
                        processor.FileExists(sourceFile, destinationFile);
                    }
                    else
                    {
                        processor.FileDoesNotExist(sourceFile, destinationFile);
                    }
                }
            }
        }

    What do you see as the pros and cons of each implementation?
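
    The IIdenticalDirectoriesProcessor interface is not shown in the question; judging from the calls in the second Run method, it would presumably look something like this:

        using System.IO;

        public interface IIdenticalDirectoriesProcessor
        {
            void DirectoryExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory);
            void DirectoryDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory);
            void FileExists(FileInfo sourceFile, FileInfo destinationFile);
            void FileDoesNotExist(FileInfo sourceFile, FileInfo destinationFile);
        }

    With that in place, the processor variant is essentially the strategy pattern: the behaviour is injected per call to Run instead of being baked in through inheritance, so one IdenticalDirectories instance can be reused with different processors.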

    Read the article

  • session_set_cookie_params on multi-domain sites

    - by nillls
    Hi! I'm currently developing an application (www.domain.se, .eu) where we're experiencing problems with sessions not propagating across domains. Internet Explorer is the root cause of this, as it will differentiate sessions depending on whether we type in "domain.se" or "www.domain.se". Due to some unfortunate redirecting, we're not able to keep the user on the same address they typed in; instead we always redirect to www.domain.se on login. Needless to say, IE users cannot log in when typing "domain.se". To make this error go away, we implemented a function to try to make the session valid across all possible domains by doing the following:

        if ($_SERVER['HTTP_HOST'] == "domain.se") {
            session_set_cookie_params(3600, '/', '.domain.se', true);
        }

    There are basically a few ifs that we go through depending on what address the user typed in, but the third argument stays the same. This, however, results in no one being able to log in, regardless of domain. I've tried reading up on how session_set_cookie_params() works, but to no avail. Any help is greatly appreciated!

    Read the article

  • Strategies for testing reactive, asynchronous code

    - by Arne
    I am developing a data-flow oriented domain-specific language. To simplify, let's just look at Operations. Operations have a number of named parameters and can be asked to compute their result using their current state. To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer. An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations. So far, so good: a nicely decoupled design, composable and reusable and, depending on the specific Observer used, as asynchronous as you want it to be. Now here's my problem: I would love to start writing actual tests against this design. But with an asynchronous Observer... how should I know that the whole signal-and-parameters plumbing worked? Do I need to use timeouts while waiting for a Signal in order to decide whether it was emitted successfully or not? How can I be, formally, sure that the Signal will not be emitted if I just wait a little longer (halting problem? ;-)) And how can I be sure that the Signal was emitted because it was me who set a parameter, and not another Operation? It might well be that my test comes too early and sees a Signal that was emitted way before my setting a parameter caused a Decision to emit it. Currently, I guess the trivial cases are easy to test, but as soon as I want to test complex many-to-many situations between operations I must resort to hoping that the design Just Works (tm)...
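
    One common pattern for the timeout half of the problem, sketched in C# (the Signal, Accessor and parameter names below are stand-ins for whatever the DSL runtime actually exposes): subscribe just before acting, then block with an explicit deadline. This bounds how long the test waits; it does not by itself answer which stimulus caused the Signal.

        using System;
        using System.Threading;

        static class SignalTestHelper
        {
            // subscribe: hooks the callback up to the Observer/Signal under test.
            // act: the stimulus, e.g. setting the parameter.
            // Returns true only if the Signal fired before the deadline.
            public static bool SignalArrived(Action<Action> subscribe, Action act, TimeSpan timeout)
            {
                using (ManualResetEventSlim signalled = new ManualResetEventSlim(false))
                {
                    subscribe(() => signalled.Set());   // hook up before acting so nothing is missed
                    act();
                    return signalled.Wait(timeout);
                }
            }
        }

        // Hypothetical usage in a test:
        //   bool ok = SignalTestHelper.SignalArrived(
        //       cb => accessor.SignalEmitted += delegate { cb(); },
        //       () => operation.SetParameter("x", 42),
        //       TimeSpan.FromSeconds(1));
        //   Assert.IsTrue(ok);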

    Read the article

  • How can I have a single helper work on different models passed to it?

    - by Angela
    I am probably going to need to refactor in two steps, since I'm still developing the project and learning the use cases as I go along (it is to scratch my own itch). I have three models: Letters, Calls, Emails. They have some similarity, but I anticipate they will also have some different attributes, as you can tell from their descriptions. Ideally I could refactor them as Events, with a type of Letter, Call or Email, but I didn't know how to extend subclasses. My immediate need is this: I have a helper which checks the status of whether an email (for example) was sent to a specific contact:

        def show_email_status(contact, email)
          @contact_email = ContactEmail.find(:first,
            :conditions => { :contact_id => contact.id, :email_id => email.id })
          if !@contact_email.nil?
            return @contact_email.status
          end
        end

    I realized that, of course, I also want to know the status of whether a call was made to a contact, so I wrote:

        def show_call_status(contact, call)
          @contact_call = ContactCall.find(:first,
            :conditions => { :contact_id => contact.id, :call_id => call.id })
          if !@contact_call.nil?
            return @contact_call.status
          end
        end

    I would love to have a single helper show_status where I can say show_status(contact, call) or show_status(contact, email) and it would know whether to look for the object @contact_call or @contact_email. Yes, it would be easier if it were just @contact_event, but I want to do a small refactoring while I get the program up and running, and this would make building a history for a given contact much easier. Thanks!

    Read the article

  • My HTML5 web app crashes and I have no clue how to debug

    - by Shouvik
    Hi all, I have written a word game using the HTML5 canvas tag and a little bit of audio. I developed the application in the Chrome web browser on a Linux system. Recently, during the testing phase, it was tried in Safari 5.0.3 on a Mac and the webpage froze. Not just the canvas element: every interactive element on the page froze. I have at times experienced this problem in Google Chrome while developing, but since the console did not throw any errors before it happened, I did not give it much credence. Now, as per requirements, I am supposed to support both Chrome and Safari, but this dismal performance on Safari has left me shocked, and I cannot see what error could be thrown that would lead to such a situation. Worse yet, the CPU usage of this application peaks at 70-80 percent on my 2-year-old MacBook running Ubuntu... I can only pity the person who uses a Mac to run this app, which is undoubtedly a heavier OS. Could someone help me out with a place I can start to find out what exactly is causing this issue? I have run profiles of this web app in the Google Chrome console and noticed that the heap snapshot value increases steadily while playing the game; specifically, the (root) value jumps up by 900 counts. Any help would be very appreciated! Thanks

    Edit: I don't know if this helps, but I have noticed that even on refreshing the page after the app becomes unresponsive, the page reloads and I am still not able to interact with the page elements, yet the tab scroll bar continues to work and I can see my application window completely. So to summarize: the tab stops accepting any sort of user interaction inside the page.

    Edit 2: Nope, it still doesn't work... The app crashes on double-clicking the canvas element. The console is not throwing any errors either! =/ I have noticed this problem is isolated to Safari!

    Read the article

  • Analyzing BSOD without a dump

    - by eeoo2555
    One of the drivers I'm developing has caused a BSOD. Unfortunately a dump file was not created, since dumps were not configured / resources were low. I have been trying to reproduce the crash, but with no luck so far. Is there any way to get some information using WinDbg or any other tool? I have the following:

        - A screenshot of the BSOD
        - The .sys file
        - Its PDB
        - The source code
        - The machine it crashed on

    I have everything except the dump itself. Your help will be much appreciated. As I said above, no dump (/minidump) exists; this is the actual problem. For this specific crash, I know I won't be able to get the stack. Just getting the specific line of code will be good enough. Because the BSOD contains the module's address, it seems like there should be a way to determine exactly which line it is. As mentioned above, I do have the .sys file, the PDB and the source code. This is the specific bug-check code, taken from MSDN: SYSTEM_SERVICE_EXCEPTION. How can I find out from there what the specific line was, and/or which specific exception was raised?

    Read the article

  • Error with `Thread.Sleep` during automatic testing on TeamCity 5

    - by yeyeyerman
    Hello, I'm having some problems executing the tests of the application I'm developing. All the tests execute normally with ReSharper and in NCover; however, the execution of one of these tests in TeamCity is generating an error. This test initializes two objects: the object under test and a simulator of a real object. The two objects communicate through a serial link, reproducing the real scenario.

        ObjectSimulator r_simulator = new ObjectSimulator(...);
        ObjectDriver r_driver = new ObjectDriver(...);
        Assert.IsTrue(r_driver.Connect() == ErrorCode.Success);

    The simulator just does the following in its constructor:

        public class ObjectSimulator
        {
            ...
            public ObjectSimulator()
            {
                // serial port configuration
                m_port = new SerialPort();
                m_port.DataReceived += DataReceivedEvent;
            }
            ...
        }

    The main object has two threads: the main thread of the application and a timer to refresh a watchdog timer in the real object.

        public ErrorCode Connect()
        {
            ...
            StartSynchroTimer();
            Thread.Sleep(4); // to check if the timer is working properly
            ...
        }

    The problem seems to be coming from the Thread.Sleep() call, as everything works when I remove it. The ObjectSimulator somehow doesn't execute the DataReceived event callback. How can I resolve this issue?

    Read the article

  • C++ - Totally suspend windows application

    - by HardCoder1986
    Hello! I am developing a simple WinAPI application and started by writing my own assertion system. I have a macro defined as ASSERT(X) which does pretty much the same thing as assert(X), but with more information, more options, etc. At some point (when the assertion system was already running and working) I realized there is a problem. Suppose I wrote code that performs some action on a timer and (just as a simple example) this action is done while handling the WM_TIMER message. Now the situation changes so that this code starts throwing an assert. The assert message would be shown every TIMER_RESOLUTION milliseconds and would simply flood the screen. Options for solving this could be:

        1) Totally pause the application (probably also suspend all threads) while the assertion message box is shown, and continue running after it is closed.
        2) Keep a static counter of shown asserts and don't show an assert while another is already showing (but this doesn't pause the application).
        3) Group similar asserts and show only one for each assert type (but this also doesn't pause the application).
        4) Modify the application code (for example, the Get/Translate/Dispatch message loop) so that it suspends itself when there are any asserts. This is good, but not universal, and looks like a hack.

    To my mind, option number 1 is the best, but I don't know how it can be achieved. What I'm looking for is a way to pause the runtime (something similar to the Pause button in a debugger). Does somebody know how to achieve this? Also, if somebody knows an efficient way to handle this problem, I would appreciate your help. Thank you.

    Read the article

  • Memory warning navigation controller

    - by fbiphone81
    Hi, I'm new to iPhone development. The question is: I have a RootViewController with 2 UIButtons. When I run the Memory Monitor instrument on my iPhone device (OS 3), Real Memory goes from 3.75 MB to 4.58 MB (I launch uiview1controller by clicking one UIButton, then close it and return to the RootViewController). When I close uiview1, memory goes to 4.22 MB. Why not back to 3.75 MB? (uiview1controller is a simple test, nothing inside.) If I launch uiview1controller again, memory increases from 4.22 MB and then returns to 4.22 MB. Correct. Then I launch uiview2controller from the second UIButton. Real Memory goes from 4.22 MB to 6.01 MB. When I close uiview2controller, memory goes to 4.73 MB. Why not back to 4.22 MB? uiview2controller is a simple view with 3 UILabels and 3 UIImages, designed with Interface Builder, and the ONLY code written by me in uiview2controller declares 3 IBOutlet UITextFields and sets the UITextField text on loading. Every time I launch uiview2controller and close it, memory increases by 0.51 MB. What's wrong? I tried releasing the IBOutlets but it's the same. Thank you very much.

    Read the article

  • Tell me again why we need both .NET and Windows? Why can't Windows morph into the CLR?

    - by le dorfier
    The same way DOS morphed into Windows? We seem to have ended up supporting and developing for three platforms from Microsoft, and I'm not sure where the boundaries are supposed to lie. Why can't the benefits of the CLR (such as type safety, memory protection, etc.) be built into Windows itself? Or into the browser? Why an entirely separate virtual machine? (How many levels of virtual-machine indirection are we dealing with now? We just added Silverlight - and before that Flash - running inside the browser running inside maybe a VM install...) I can see raw Windows for servers, but why couldn't there be a CLR for workstations talking directly to the hardware (or at least not the whole Windows legacy ball and chain)? (Oops - I've got two questions here. Let's make this: why can't .NET be built into Windows? I understand about backward compatibility, but the safety of what's in .NET could be at least optionally in Windows itself, couldn't it? It would just be yet another of many sets of APIs?) Factoid: I recall that one of the competing architectures selling against MS-DOS on the IBM PC was the UCSD Pascal runtime - a VM.

    Read the article

  • Best method to cache objects in PHP?

    - by Martin Bean
    Hi, I'm currently developing a large site that handles user registrations - a social networking website, for argument's sake. However, I've noticed a lag in page loads and have worked out that it is the creation of objects on pages that's slowing things down. For example, I have a Member object that, when instantiated with an ID passed as a constructor parameter, queries the database for that member's row in the members database table. Not bad, but this happens each time a page is loaded, and more than once when, say, building an array of that particular member's friends, as a new Member object is created for each friend. So on a single page I can have upwards of seven of the same object, each containing different properties. What I want to do is find a way to reduce the database load and to persist objects between page loads. For example, the logged-in user's object would be created on login (which I can do) but then stored somewhere for retrieval, so I don't have to keep re-creating it between page loads. What is the best solution for this? I've had a look at Memcache, but as it is a third-party module I can't have the web host install it on this occasion. What are my alternatives and/or best practices in my case? Thanks in advance.

    Read the article

  • How to learn a new programming language? And how to choose the appropriate language?

    - by Sebi
    Unfortunately my last question was closed, so I have reformulated it. I know this question has been asked here many times and can't be answered definitively, but I'm not looking for a single name, rather for advice in my situation. I learned programming with Java and have now been developing in Java for more or less 5 years (at university), and I think my programming skills there are really OK/average. I also have a little experience in C/C++ and C#. Now I have some spare time and I'd like to learn a new language or deepen my knowledge of Java/C/C++. But how do I choose the right language to learn? I'd like to learn a language which will be useful in the future for working in the software development business. I know there is no single answer, but I'm sure you could mention some languages that are more useful than others. And what's the best way to learn such a language efficiently (bearing in mind that one has already learned some other languages)? Just doing tutorials? Trying to implement a project? Porting a Java project to the new language?

    Read the article

  • Looking for something to add some standard rules for my C++ project.

    - by rkb
    Hello all, my team is developing a C++ project on Linux, and we use vim as our editor. I want to enforce some coding-standard rules in our team in such a way that if the code is not in accordance with them, some sort of warning or error is produced when it builds or compiles. It doesn't necessarily have to happen at build time, but at the least I want to be able to run some plugin or tool on the code to make sure it meets the standard, so that before committing to svn everyone runs the code through that plugin or script and can only commit if it meets the requirements. I'm not sure if we can add such rules to vim; if we can, let me know how. For example, in our coding standards all member variables and private functions should start with _:

        class A {
        private:
            int _count;
            float _amount;
            void _increment_count() { ++_count; }
        };

    So I want some sort of warning or error or message for this class if the variables are declared as follows:

        class A {
        private:
            int count;
            float amount;
            void increment_count() { ++count; }
        };

    Please note that the warning or error would not come from the compiler, because the program is still valid; it would come from the tool I want to use, so that the code goes back for refactoring but still builds fine. I am looking for some sort of plugin, pre-parser or script which will help me achieve all this. (We currently use svn; just to answer the comment.)

    Read the article

  • Footprint of Lua on a PPC Micro

    - by Adam Shiemke
    We're developing some code on Freescale PPC micros (5517 and 5668 at the moment), and I was wondering if we could put Lua on them. The devices need to be easily programmed/reconfigured in the field, and the current product uses a proprietary interpreted logic language that can be loaded in, and our software (written in C) runs an interpreter. I would like to move to a better language (the implementation is a bit buggy and slow), so I'm considering Lua, but the memory footprint must be very low. For the 5517 (which we may not use), the maximum RAM is 80K. Things are better on the 5668, with 592K of RAM. So does anyone know if I can put Lua on bare metal? We're effectively not running an OS. If so, are there any estimates on what kind of memory footprint we might see? How much effort would it take? Failing this, does anyone know of any kind of interpreter that might be better in a memory-constrained environment without an OS? Or are we better off just rolling our own?

    Read the article

  • Determine the week when picking a day

    - by derikluv
    I'm trying to accomplish the following task. I'm developing a custom calendar with three views (day, week and month); there may be something out there already, but I'm rewriting this as a learning tool for myself. Users will see the Day view when they first visit, with arrows to go back and forth to the next or previous day. If they click the Week view, it gives them a 7-day overview with today's date as the default, and again they can go back and forth to the next or previous week. The last view is the full month calendar; once they click on a day, it gives them the detail of that day and at the same time resets the default to the day they picked. So if they go back to the Week view, they will see the detail for the week containing the day they picked. This is where I have trouble wrapping my head around it: I know there are PHP functions that determine the day of the week, but I can't seem to work out how to pass in a date and get the full week, starting from Sunday, for the day that was passed in. For example, if I pass in 10/12/2012, I'd like the week to run from 10/07/2012 to 10/13/2012. Thank you kindly for your help or for pointing me in the right direction. Please excuse my grammar/spelling mistakes as well.
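
    The week arithmetic itself is language-independent; here it is sketched in C# purely for illustration (the poster is working in PHP, and the helper name is made up): rewind the date by its day-of-week index to reach Sunday, then add six days for Saturday.

        using System;

        static class WeekHelper
        {
            public static void GetWeekRange(DateTime day, out DateTime weekStart, out DateTime weekEnd)
            {
                // DayOfWeek.Sunday is 0, so this subtraction lands on the Sunday of the same week.
                weekStart = day.Date.AddDays(-(int)day.DayOfWeek);
                weekEnd = weekStart.AddDays(6);
            }
        }

        // WeekHelper.GetWeekRange(new DateTime(2012, 10, 12), out start, out end)
        // yields start = 2012-10-07 and end = 2012-10-13, the range in the example above.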

    Read the article

  • Checking if a function has C-linkage at compile-time [unsolvable]

    - by scjohnno
    Is there any way to check if a given function is declared with C-linkage (that is, with extern "C") at compile-time? I am developing a plugin system. Each plugin can supply factory functions to the plugin-loading code. However, this has to be done via name (and subsequent use of GetProcAddress or dlsym). This requires that the functions be declared with C-linkage so as to prevent name-mangling. It would be nice to be able to throw a compiler error if the referred-to function is declared with C++-linkage (as opposed to finding out at runtime when a function with that name does not exist). Here's a simplified example of what I mean:

        extern "C" void my_func() { }
        void my_other_func() { }

        // Replace this struct with one that actually works
        template<typename T>
        struct is_c_linkage {
            static const bool value = true;
        };

        template<typename T>
        void assertCLinkage(T *func) {
            static_assert(is_c_linkage<T>::value, "Supplied function does not have C-linkage");
        }

        int main() {
            assertCLinkage(my_func);       // Should compile
            assertCLinkage(my_other_func); // Should NOT compile
        }

    Is there a possible implementation of is_c_linkage that would throw a compiler error for the second function, but not the first? I'm not sure that it's possible (though it may exist as a compiler extension, which I'd still like to know of). Thanks.

    Read the article

  • Is it wrong for a context (right click) menu to be the only way a user can perform a certain task?

    - by Eric
    I'd like to know if it ever makes sense to provide some functionality in a piece of software that is only available to the user through a context (right click) menu. It seems that in most software I've worked with the right click menu is always used as a quick way to get to features that are otherwise available from other buttons or menus. Below is a screen shot of the UI I'm developing. The tree view on the right shows the user's library of catalogs. Users can create new catalogs, or add and remove existing catalogs to and from their library. Catalogs in their library can then be opened or closed, or set to read-only. The screen shot shows the context menu I've created for the browser. Some commands can be executed independently from any specific catalog (New, Add). Yet the other commands must be applied to a specifically selected catalog (Close, Open, Remove, ReadOnly, Refresh, Clean UP, Rename). Currently the "Catalog" menu at the top of the window looks identical to this context menu. Yet I think this may be confusing to the users as the tree view which shows the currently selected catalog may not always be visible. The user may have switched to the Search or Filters tab, or the left pane may be hidden entirely. However, I'm hesitant to change the UI so that the commands that depends on a specifically selected catalog are only available through the context menu.

    Read the article

  • Problem virtualizing Ubuntu 10.04 32 bit on VirtualBox 3.1 on Windows Vista 64 bit

    - by Adam Siddhi
    Software & Hardware Setup
        Host System: Windows Vista Home Premium SP1 64 bit
        Guest: Ubuntu 10.04 (ubuntu-10.04-desktop-i386.iso) 32 bit
        VM: VirtualBox 3.1.8
        Hardware: Intel Core 2 Duo T6400, 4 GB SDRAM

    What Happened
    I followed the tutorial called Installing Ubuntu inside Windows using VirtualBox, located here: www.psychocats.net/ubuntu/virtualbox. At first I downloaded ubuntu-10.04-desktop-amd64.iso because I figured it would be a perfect fit with my Vista 64 OS. I was wrong, because it turns out that my Intel Core 2 Duo T6400 CPU does not have Intel® Virtualization Technology, so I had to go with ubuntu-10.04-desktop-i386.iso, which is 32 bit. This got me to the point where I could actually create the Ubuntu VM. I then set up the VM in VirtualBox (according to the tutorial I was following) to prepare for the Ubuntu 10.04 virtualization. Please go to my Picasa web album to see the screenshots of my VM settings and the Ubuntu boot process, so you can see what I experienced (they appear in the order I experienced them): www.picasaweb.google.com/rubysiddhi/ProblemVirtualizingUbuntu100432BitOnVirtualBox31OnWindowsVista64# The first 17 images show the VM settings; the last 8 show my attempt at virtualizing Ubuntu 10.04. You can see it booting up but ultimately failing.

    The Specifics
    The one error message I got was:

        (process:210): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0)

    It appeared on a black screen that sort of looked like a Windows console, but without the c:\ prompt or the ability to type. Then this error got more complex when tons of text appeared on the screen; pictures 23-25 in the album show this text. I should also mention that I found a post in the Ubuntu forums by zonination, who seemed to have problems similar to mine even though they had a different setup. The main issue zonination and I may both be having is that we cannot change the color mode to 32 bit while it is booting. I think the 16 bit color mode may be making Ubuntu fail; not certain though. Well, I hope I explained my problem thoroughly and clearly. Thanks for the tutorial - it got me started, but now I hope to finish this process so I can start developing in Ubuntu. Oh, by the way, if you want to see what happened play by play (with some classical music in the background), check out the video I made over here: http://www.youtube.com/watch?v=XMbbm5E_0Xw Thanks! Regards, Adam

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people feel about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euro for 3 STOPPED instances (not running) of "HelloAzure". In the past I also played around with Amazon Web Services, and Amazon doesn't charge for stopped instances. I was wondering: should I have known this beforehand, or is Microsoft doing a bad job of communicating the pricing model clearly? Quote from http://www.microsoft.com/windowsazure/pricing/ :

        Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours.

    I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just stopped them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB of hard drive space; it doesn't require any CPU time. If Azure wants to reserve CPUs for instances that are not running... well, that's Azure's choice, not mine. I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be clearer about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance!

    Update 27-01-2011: I sent an email to customer support a few days ago, but I guess it never reached a human being because I haven't heard anything back. So I made a telephone call today to a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.

    Update 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, so my credit card will first be charged, after which it will be refunded, but hey, that's no problem for me! I'm glad with the way it's been resolved. Thanks everybody!

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are: HA protection from storage failure; data accessibility when one of the DB servers is undergoing software updates (e.g. a planned outage for Windows Update / SQL Server service packs); and it must not involve much in the way of hardware procurement. The application is an ASP.NET web application, and its users each have their own database instance.

    I've seen two main options: SQL Server failover clustering and SQL Server mirroring. I understand that SQL Server failover clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option (as it only requires two database servers and a simple witness box), but I've heard it doesn't work well when you have a large number of databases. The application I'm developing gives each client their own database, so there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place.

    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means the cluster is invisible to clients: a connection attempt might fail during the failover, but a simple reconnect will have it working again. Mirroring, however, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications: does the client connection failover mean there's a potential 2-second (assuming a 2000 ms TCP timeout policy) pause while the connection pool tries the primary server on every connection attempt?

    I read somewhere that mirroring can be used on top of MSCS, which means that the client does not need to be aware of the mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string); however, I'm finding it hard to get documentation or white papers on this approach. If true, it means the best method would be mirroring (for HA) with MSCS (for client transparency and connection performance)... but how does this scale to a server instance that might contain hundreds of mirrored databases?
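
    For reference on the "client must be aware of the mirrored partners" point: with ADO.NET this is normally expressed through the Failover Partner keyword in the connection string. A minimal sketch with placeholder server and database names:

        using System.Data.SqlClient;

        // Only the Failover Partner keyword is standard ADO.NET syntax; the names here are placeholders.
        string connectionString =
            "Data Source=PrimaryServer;Failover Partner=MirrorServer;" +
            "Initial Catalog=ClientDb;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();   // falls back to MirrorServer if PrimaryServer is unreachable
            // ... run queries as usual
        }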

    Read the article
