Search Results

Search found 27684 results on 1108 pages for 'computer management'.

Page 166/1108

  • Great computer-science speeches

    - by sub
    I've looked at some questions here that list the "best" programming books and wondered why there isn't a question about speeches yet. I think that speeches or presentations from developers, or even from creators of programming languages that were or are heavily used, are particularly interesting. One of my favorite speeches was recommended to me by someone here on SO: The future of C#. I also like Guido van Rossum's talks, though he sometimes seems pretty nervous. Another good presentation, in my opinion, is the Google tech talk about Go. Which (recorded) programming presentations/speeches are worth watching? Edit: Made this a community wiki, as the answer would probably be a pretty long list.

    Read the article

  • Neural Networks or Human-computer interaction

    - by Shahin
    I will be entering my third year of university next academic year, once I've finished my placement year as a web developer, and I would like to hear some opinions on the two modules in the title. I'm interested in both, but I want to pick the one that will be relevant to my career and that I can apply to systems I develop. I'm doing an Internet Computing degree; it covers web development, networking, database work and programming. Though I had set myself on becoming a web developer, I'm not so sure about that any more, so I'm trying not to limit myself to that area of development. I know HCI would help me as a web developer, but do you think it's worth it? Do you think Neural Network knowledge could realistically help me in a system I write in the future? Thanks. EDIT: Hi guys, I thought it would be useful to follow up with what I decided to do and how it's worked out. I picked Artificial Neural Networks over HCI, and I've really enjoyed it. Having a peek into cognitive science and machine learning has ignited my interest in the subject area, and I hope to take on a postgraduate project a few years from now when I can afford it. I have a job starting after my final exams (which are in a few days), and I was indeed asked if I had done a module in HCI or similar. It didn't seem to matter, as it isn't a front-end developer position! I would recommend taking the module if you have it as an option, as well as any module covering biological computation; it will open up more doors should you want to go on to postgraduate research in the future. Thanks again, Shahin

    Read the article

  • Computer graphics: programmatically create duotone (or separations)

    - by TarGz
    There is a special kind of image called "duotone" which has just two channels. It is mostly used when you want to achieve higher-quality reproduction on a printing press with two inks (black, gray). My question is: I have a normal grayscale image, so how do I convert it to duotone? I know I can tweak the curves in Photoshop, but that is not what I'm asking; rather, how do I do it programmatically? Perhaps there is a library that can do just that? What about "dot gain compensation"? "Total ink coverage"?
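
    As a rough illustration of the programmatic side (this is only a sketch; it ignores dot gain compensation and total ink coverage limits): a duotone/separation pair can be built by running each grayscale value through one transfer curve per ink and writing each result out as its own separation image. The transfer curves, the 60% cap and the file names below are made-up assumptions; in C# with System.Drawing it might look like this:

        using System;
        using System.Drawing;

        class DuotoneSketch
        {
            // Hypothetical transfer curves: how much of each ink to lay down for a
            // given gray value (0 = black, 255 = white). A real workflow would use
            // measured curves with dot gain compensation applied to them.
            static byte BlackInk(byte gray) { return (byte)(255 - gray); }              // black carries the full range
            static byte GrayInk(byte gray)  { return (byte)((255 - gray) * 60 / 100); } // gray ink capped at 60%

            static void Main()
            {
                using (var src   = new Bitmap("input-grayscale.png"))   // assumed input file
                using (var black = new Bitmap(src.Width, src.Height))
                using (var gray  = new Bitmap(src.Width, src.Height))
                {
                    for (int y = 0; y < src.Height; y++)
                    {
                        for (int x = 0; x < src.Width; x++)
                        {
                            byte g = src.GetPixel(x, y).R;   // grayscale source: R == G == B
                            byte b = BlackInk(g);
                            byte k = GrayInk(g);
                            // Store each separation as its own grayscale image (darker = more ink).
                            black.SetPixel(x, y, Color.FromArgb(255 - b, 255 - b, 255 - b));
                            gray.SetPixel(x, y, Color.FromArgb(255 - k, 255 - k, 255 - k));
                        }
                    }
                    black.Save("separation-black.png");
                    gray.Save("separation-gray.png");
                }
            }
        }

    GetPixel/SetPixel is slow on large images (Bitmap.LockBits is the usual faster route), and a real implementation would replace the linear curves above with measured ones.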

    Read the article

  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? Say I build and run this code on my dev machine (localhost): servicehost = new ServiceHost(typeof(MyService1)); servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost. What needs to be implemented on the datacenter server so that if I point a client at http://my.datacenter.com/MyApp/MyService1, it routes the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. This is a possible infrastructure we are researching, to see if we can create a service-bus-type architecture so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.
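
    One way to get this effect, sketched below rather than prescribed, is to host a pass-through implementation of the same service contract at the datacenter and have it forward each call over net.tcp to the machine that actually implements it. The contract, the DoWork operation and the dev-machine address here are assumptions extrapolated from the snippet above:

        using System.ServiceModel;

        // The same contract the real service implements on the dev machine.
        [ServiceContract]
        public interface IMyService1
        {
            [OperationContract]
            string DoWork(string input);   // hypothetical operation
        }

        // Hosted at the datacenter; forwards every call to the dev machine,
        // so only the datacenter needs to be reachable by clients.
        public class ForwardingService : IMyService1
        {
            public string DoWork(string input)
            {
                // A production version would cache the factory rather than build it per call.
                var factory = new ChannelFactory<IMyService1>(
                    new NetTcpBinding(),
                    new EndpointAddress("net.tcp://dev-machine:9000/MyApp/MyService1"));   // assumed address
                IMyService1 proxy = factory.CreateChannel();
                try
                {
                    return proxy.DoWork(input);
                }
                finally
                {
                    ((IClientChannel)proxy).Close();
                    factory.Close();
                }
            }
        }

        // At the datacenter:
        //   var host = new ServiceHost(typeof(ForwardingService));
        //   host.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
        //       "net.tcp://my.datacenter.com/MyApp/MyService1");
        //   host.Open();

    On .NET 4 the System.ServiceModel.Routing.RoutingService can do this kind of forwarding declaratively, which tends to scale better once many customer endpoints are involved.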

    Read the article

  • Approaches for Error Code/Message Management in .NET

    - by WayneC
    Looking for suggestions/best practices on managing error codes and messages in a multi-tiered application. Specifically, things like: Where should error codes be defined? An enum? A class? How are error messages or further details associated with the error codes? Resource files? Attributes on enum values, etc.? If you have a multi-tier application consisting of DAL, BLL, UI, and Common projects, for example, should there be a single giant list of codes for all tiers, or should the codes be extensible by project/tier? Update: It's important to mention that I can't rely solely on Exceptions and custom Exception types for error reporting, as some clients of this application will connect via web services (SOAP & REST). Any suggestions welcome!
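
    One pattern that fits this kind of layout, sketched here under the assumption that the codes live in the shared Common project, is an enum whose values carry a message attribute, plus a small helper that resolves the text (a resource-file lookup keyed on the enum name could replace the attribute if localization matters). The specific codes and messages below are invented for illustration:

        using System;
        using System.ComponentModel;
        using System.Reflection;

        // Defined once in the Common project so DAL, BLL, UI and the web services share it.
        public enum ErrorCode
        {
            [Description("The requested record was not found.")]
            RecordNotFound = 1001,

            [Description("The caller is not authorized to perform this operation.")]
            NotAuthorized = 2001,

            [Description("An unexpected error occurred in the data layer.")]
            DataAccessFailure = 3001
        }

        public static class ErrorCodeExtensions
        {
            // Pulls the message from the attribute; swap in a ResourceManager lookup
            // keyed on the enum name if the messages need to be localized.
            public static string GetMessage(this ErrorCode code)
            {
                FieldInfo field = typeof(ErrorCode).GetField(code.ToString());
                var attr = (DescriptionAttribute)Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute));
                return attr != null ? attr.Description : code.ToString();
            }
        }

        // Usage, e.g. when shaping a SOAP fault detail or a REST error body:
        //   var error = new { Code = (int)ErrorCode.RecordNotFound,
        //                     Message = ErrorCode.RecordNotFound.GetMessage() };

    Because the enum and its numeric values serialize cleanly, the same code/message pair can travel inside exceptions internally and be flattened into SOAP fault details or a REST error payload for external clients.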

    Read the article

  • Will wear induced by turning computers off in the evening be offset by energy savings?

    - by sharptooth
    I'm asking this here because this is primarily a large-office scenario, and administrators are more likely to have the answer I'm looking for. Employees' desktop computers can either be left turned on for the whole night or switched off in the evening and turned back on in the morning. The latter will surely save energy. At the same time, turning equipment on and off causes wear; hardware often breaks specifically when it is turned on. Both energy and hardware replacements cost money. With energy it's quite obvious: you pay every month according to what your power meter shows. With hardware replacements it's worse: you need qualified staff to quickly diagnose the problems, and once something breaks, the affected employee has to wait while his computer is fixed or replaced and the data is recovered. So the company has to choose between saving money on energy and saving money on computer maintenance and lost hours. Such decisions must be well thought out. Is there any detailed study of how turning computers off each evening affects their lifetime?
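
    For a rough sense of scale (every number below is an assumption for illustration, not a measurement): a desktop drawing 100 W at idle that is switched off for an extra 13 hours per working day saves about 0.1 kW × 13 h × 250 days = 325 kWh per year, which at a commercial rate of roughly $0.10-0.15 per kWh is on the order of $30-50 per machine per year. That per-machine figure is what the expected cost of power-cycling failures (parts, technician time and the affected employee's lost hours, multiplied by the failure probability) has to be weighed against.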

    Read the article

  • Are there built-in issue tracking and task management in IDEs that integrate into SourceForge, Googl

    - by Kai Sellgren
    Hi, this question may not be exactly programming-related, but it is about development. I have been trying to find IDEs that support JIRA/Bugzilla so that I could simply integrate the IDE with SourceForge. I don't like having to refresh my browser to see issues, bug reports, security problems, etc. I would like to send issues and resolve issues right from the IDE. I am currently developing with NetBeans, but I see no way of integrating with any of the services provided by SourceForge. Am I missing something?

    Read the article

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen much that's compelling in terms of managing software releases. Here's our release process: Discover what changes are available for merging. Run a query to find the defects/tickets associated with these changes. Filter out changes associated with "open" tickets; in our environment, tickets must be in a closed state in order to be merged into a release branch. Filter out changes we don't want in the release branch; we are very conservative when it comes to merging changes, and if a change isn't absolutely necessary, it doesn't get merged. Merge available changes, preferably in chronological order, grouping changes together if they're associated with the same ticket. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again. Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them addresses this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial (you have to use the 'transplant' command), and after you cherry-pick a change into a branch it still shows up as an available integration; cherry-picking breaks the mercurial way of working. DVCS seems to be better suited to feature branches: there's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn, because the coder in me wants a DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code, I don't want to be a merge monkey.

    Read the article

  • Problem remotely managing Exchange Server 2010

    - by Carlos
    I can't connect to the instance of Exchange Server 2010 through the EMC on the local machine, which is running W2K8 R2. I've checked all the default website bindings, and Kerberos auth and WSMan are set to native type in PowerShell, but I still get this error message: Connecting to remote server failed with the following error message: The WS-Management service does not support the request. It was running the command 'Discover-ExchangeServer -UseWIA $true -suppresserror $true'.

    Read the article

  • session management: problem displaying username in the header

    - by aeonsleo
    Hi, I am working on a simple login and logout module for my website, without any security. I am using WAMP on a Windows XP machine. When a user submits the login information, it is posted to a process.php file which creates the session variables and starts the session. If the login is successful, the user is redirected to the welcome page, which includes a header file (the header displays the sign-in, logout and help options). The problem is that the header does not change the sign-in link to a logout link after the user logs in successfully. The code below, from process.php, handles the login:

        $username = $_POST['username'];
        $password = $_POST['password'];
        //echo "{$username}:{$password}";
        $connection = mysql_connect("localhost", "root", "");
        if (!$connection) {
            die("Database Connection Failed" . mysql_error());
        }
        $db_select = mysql_select_db("tester", $connection);
        if (!$db_select) {
            die("Database Selection Failed" . mysql_error());
        }
        $result = mysql_query("SELECT * FROM user", $connection);
        if (!$result) {
            die("Database Selection Failed" . mysql_error());
        }
        $q = "SELECT * FROM user "
           . "WHERE Name='" . $username . "' AND Password='" . $password . "' ";
        // Run query
        $r = mysql_query($q);
        if ($obj = @mysql_fetch_object($r)) {
            session_start();
            // Login good, create session variables
            $_SESSION["valid_id"] = session_id();
            $_SESSION["valid_user"] = $_POST["username"];
            $_SESSION["valid_time"] = time();
            Header('Location: welcome.php');

    The following code is from header.php, which is included in welcome.php:

        </div>
        <div id="userdetail">
        <?php
        if (isset($_SESSION["valid_user"])) {
            echo($_SESSION["valid_user"] . " ");
            echo("<a href=logout.php>Logout</a>");
        } else {
            echo("<a href=login.php>Sign In</a>");
        }
        ?>
        | Help | Search <input type="text" name="searchbox" value="" />
        </div>
        </div>

    Read the article

  • Computer crashes if left unattended AND running uTorrent, what could it be?

    - by DiegoDD
    I have a weird problem: if I leave my computer unattended AND I leave uTorrent open, downloading/seeding, the computer simply crashes after about 20-30 minutes (I don't know exactly, since if I leave it and come back later, it has already restarted or is showing a BSOD). If I leave the computer alone for any length of time WITHOUT uTorrent, nothing happens, and if I am constantly using the computer while using uTorrent, there is no problem either (I could use it all day with uTorrent open and it doesn't crash). So what could it be about the combination of those two conditions that makes the computer crash? I have already checked the power management settings, so the computer never enters standby, sleep, hibernate, etc. (the only thing I do is turn off the display). A first guess is that maybe one of my external hard drives DOES sleep or enter a "low power" mode if I don't use the PC, but since uTorrent is running it MAYBE tries to use that drive, and that makes it crash. Could that be possible? How can I know for sure whether the external drive does that, and how can I prevent it from doing so? Any more ideas about what may be causing this? Specs: Windows 7 Pro, 64-bit, 2 GB RAM. Latest version of uTorrent (although it has been happening for a while now). UPDATE: I just found out that uTorrent has disk cache options (Preferences - Advanced - Disk Cache). I have no idea if that may be causing problems with my external drive, hence causing the crash.

    Read the article

  • How to approach local database management via a Flex/AIR app

    - by Damon
    I am planning on using Flash Builder 4/Flex to write an AIR application which is primarily based around recording, storing and analyzing data. I'll be creating charts, etc., that need to update in almost real time. It's essential to me that the application be able to function without an internet connection, so I need a local database of some variety, but I would also eventually like to build in online synchronization of the data, where either database can update the other based on newer information. Moreover, some type of encryption would definitely be welcome, and speed is a large concern. With all that in mind, what is my best approach to try and avoid headaches down the road?

    Read the article

  • Using ASP.NET Session for Lifetime Management (Unity)

    - by Sigray
    I am considering using Unity to manage the lifetime of a custom user class instance. I am planning on extending LifetimeManager with a custom ASP.NET session manager. What I want to be able to do is store and retrieve the currently logged-in user object from my custom classes, and have Unity get the instance of User from the session object in ASP.NET, or (when in a Win32 project) retrieve it statically or from the current thread. So far my best solution is to create a static instance of my Unity container on startup and use the Resolve method to get my User object from each of my classes. However, this seems to create a dependency on the Unity container in my other classes. What is the more "Unity" way of accomplishing this goal? I would like to be able to read/replace the current User instance from any class.
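
    A minimal sketch of that idea, assuming the Unity 2.x LifetimeManager base class (GetValue/SetValue/RemoveValue) and an ASP.NET host; the session key is invented here. Registering User with this manager and then constructor-injecting it also removes the need for other classes to call Resolve on a static container:

        using System.Web;
        using Microsoft.Practices.Unity;

        // Stores the resolved instance in the current ASP.NET session. In a Win32
        // host, a [ThreadStatic] field or other per-thread store could back the
        // same class instead of HttpContext.
        public class SessionLifetimeManager : LifetimeManager
        {
            private readonly string _key;

            public SessionLifetimeManager(string key = "unity:CurrentUser")   // hypothetical key
            {
                _key = key;
            }

            public override object GetValue()
            {
                return HttpContext.Current != null ? HttpContext.Current.Session[_key] : null;
            }

            public override void SetValue(object newValue)
            {
                if (HttpContext.Current != null)
                    HttpContext.Current.Session[_key] = newValue;
            }

            public override void RemoveValue()
            {
                if (HttpContext.Current != null)
                    HttpContext.Current.Session.Remove(_key);
            }
        }

        // Registration at startup; classes that need the current User then take it
        // as a constructor parameter instead of calling container.Resolve<User>():
        //   container.RegisterType<User>(new SessionLifetimeManager());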

    Read the article

  • Document management, SCM?

    - by tsunade
    Hello, this might not be a hard-core programming question, but it's related to some of the tools used by programmers, I suspect. We're a bunch of people, each with a bunch of documents and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind, and I've tried Subversion; it seems to be a good thing that it uses a centralized repository, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think), and while it works it becomes horribly slow for the big directories we have here, since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a no, does anyone know other tools that do this job? Thanks for reading :)

    Read the article

  • Android: stack management for views in a tab?

    - by wei
    I see that some answers here prefer views over activities as the contents of tabs. Correct me if I am wrong: my understanding is that by switching out views, it's possible to keep the navigation flow inside a tab (more user-friendly, I think). But I wonder how to manage the view stack in the case of back button events. Also, this could lead to one giant Activity with a large number of views, which might not be good. So I'd like to know exactly why views as content are preferred before I change my current application over to this. Thanks.

    Read the article

  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are not dirty anymore when data is deleted. Should I use mmap and munmap when a block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues have expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.

    Read the article

  • Public static variables and Android activity life cycle management

    - by jsstp24n5
    According to the documentation, the Android OS can kill activities at the rear of the back stack. So, say for example I have an app and open the main activity (let's call it Activity A). In this public activity class I declare and initialize a public static variable (let's call it "foo"). In Activity A's onCreate() method I then change the value of "foo". From Activity A the user starts another activity within my app, called Activity B. Variable "foo" is used in Activity B. Activity B is then paused after the user navigates to some other activities in other apps. Eventually, after a memory shortage occurs, Activity A and then Activity B can be killed. After the user navigates back to my app, it restarts (actually "recreates") Activity B. What happens: 1) Will variable "foo" at this point have the value that was set when Activity A's onCreate() method ran? 2) Does variable "foo" no longer exist? 3) Does variable "foo" exist, but with its initialized value rather than the value set in Activity A's onCreate() method?

    Read the article

  • Computer-generated files - how do they work?

    - by hory.incpp
    Hello, .... ‹BÿЃÀ‰D$Ç„$  ....... that's what happens when you open (in Notepad) the kind of file I'm talking about. How do algorithms decode that information, and when does a program use/generate it? Does some Notepad-like application exist that opens such files and transforms them into readable code/data? Any more information that clarifies how these files work will be very helpful. Thank you for your time. P.S. I'm not talking strictly about .exe files.
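
    Those characters are just a text editor's attempt to render raw bytes that were never meant to be text: the program that wrote the file encodes numbers, structures and offsets directly as bytes, and the program that reads it back interprets them against the file format's specification rather than as characters. The usual tool for looking at such files is a hex viewer; a minimal sketch of one is below (the file name is made up):

        using System;
        using System.IO;

        class HexDump
        {
            static void Main()
            {
                byte[] data = File.ReadAllBytes("mystery.bin");   // assumed file name

                for (int offset = 0; offset < data.Length; offset += 16)
                {
                    Console.Write("{0:X8}  ", offset);                 // byte offset column
                    for (int i = 0; i < 16 && offset + i < data.Length; i++)
                        Console.Write("{0:X2} ", data[offset + i]);    // raw bytes in hexadecimal
                    Console.WriteLine();
                }
            }
        }

    Each output line shows an offset and sixteen bytes in hexadecimal, which is the form most file-format documentation uses to describe the layout a reader has to follow.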

    Read the article

  • .NET programming on an Apple computer

    - by VoodooChild
    Hello, I really would like to write an app or apps for iPhone/iPad. I've never done this so far because most of my work has been in a Windows environment. I recently got an i7 with Windows 7 and love it, and this is what I am currently using for development. I would love to try writing a simple app on a Mac for either the iPhone or iPad. The question I had was: are there any developers using a MacBook to do Windows-based programming as well as writing apps? And what are their setups like (for example, using Boot Camp)? Most importantly, is it recommended, based on their experience doing so? Any problems or performance issues? These are the concerns I have to address before justifying spending time and money on this. Thanks, Voodoo

    Read the article

  • Memory management in iPhone cocos2d

    - by muthu
    I am an iPhone developer, very new to this field. I am developing an ebook app for the iPhone using cocos2d. I use more than 150 images (I guess). The problem is that while turning from one page to another, the app randomly hangs on images. I also tried [[TextureMgr sharedTextureMgr] removeAllTextures]; but it was in vain. I guess the problem is with memory. This is my code for all the pages:

        -(id)init {
            if( (self = [super init]) ) {
                self.isTouchEnabled = YES;
                [SimpleAudioEngine sharedEngine];
                NSLog(@"b4 cover");
                Sprite *bg1 = [Sprite spriteWithFile:@"a.jpg"];
                bg1.anchorPoint = CGPointZero;
                [self addChild:bg1 z:-1];
                once = TRUE;
                soundId = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"];
            }
            return self;
        }

        -(void) transitionfront:(id) sender {
            [[SimpleAudioEngine sharedEngine] stopEffect:soundId];
            soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@"page_turn.mp3"];
            flip = [[Sprite spriteWithFile:@"a.jpg"] retain];
            [self addChild:flip z:1];
            [flip setPosition:ccp(160,240)];
            Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09];
            for( int i = 1; i < 4; i++ )
                [animation1 addFrameWithFilename:[NSString stringWithFormat:@".jpg", i]];
            id action = [Animate actionWithAnimation:animation1];
            //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation:animation1]];
            [flip runAction:action];
            [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(moveforward) userInfo:nil repeats:NO];
        }

        -(void) moveforward {
            [[SimpleAudioEngine sharedEngine] stopEffect:soundId1];
            [[Director sharedDirector] replaceScene:[[Scene node] addChild:[nextpage node] z:0]];
        }

        -(void) transitionback:(id) sender {
            [[SimpleAudioEngine sharedEngine] stopEffect:soundId];
            soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"];
            flip = [[Sprite spriteWithFile:@".jpg"] retain];
            [self addChild:flip z:1];
            [flip setPosition:ccp(160,240)];
            Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09];
            for( int i = 3; i > 0; i-- )
                [animation1 addFrameWithFilename:[NSString stringWithFormat:@".jpg", i]];
            id action = [Animate actionWithAnimation:animation1];
            //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation:animation1]];
            [flip runAction:action];
            [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(movebackward) userInfo:nil repeats:NO];
        }

        -(void) movebackward {
            //[[SimpleAudioEngine sharedEngine] stopEffect:@".mp3"];
            [[Director sharedDirector] replaceScene:[[Scene node] addChild:[b4page node] z:0]];
        }

        -(void) glossary:(id) sender {
            [[SimpleAudioEngine sharedEngine] stopEffect:soundId];
            [[Director sharedDirector] replaceScene:[[Scene node] addChild:[ node] z:0]];
        }

        -(BOOL)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint cocosTouchPoint = [touch locationInView:[touch view]];
            CGPoint point = [[Director sharedDirector] convertToGL:cocosTouchPoint];
            NSLog(@"pointx: %f pointy:%f", point.x, point.y);
            // Was a tab touched, if so, which one...
            if (CGRectContainsPoint(CGRectMake(220, 0, 100, 70), point)) {
                if(once) {
                    NSLog(@"enterred page1");
                    [self transitionfront:nil];
                    once = FALSE;
                }
            }
            if (CGRectContainsPoint(CGRectMake(0, 0, 60, 60), point)) {
                if(once) {
                    NSLog(@"enterred cover");
                    [self transitionback:nil];
                    once = FALSE;
                }
            }
            if (CGRectContainsPoint(CGRectMake(100, 15, 30, 30), point)) {
                if(once) {
                    [self glossary:nil];
                    once = FALSE;
                }
            }
            return kEventHandled;
        }

        -(void)playEffect:(NSString*)sound {
            if(effectPlayer != nil) {
                [effectPlayer release];
            }
            NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:sound ofType:@"mp3"]];
            effectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
            [effectPlayer setDelegate:self];
            [effectPlayer play];
        }

        -(void)stopEffect {
            [effectPlayer stop];
        }

        -(void) dealloc {
            [super dealloc];
        }

    Please help me, and if possible give me exact code. This is the error:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary setObject:forKey:]: attempt to insert nil value (key: aesop.mp3)'
        2010-05-27 10:43:09.834 abc[276:20b] Stack: ( 11674715, 2476006971, 11758651, 11758490, 5126917, 660698, 660881, 661061, 131577, 448857, 120432, 153433, 630890, 23694899, 23603228, 23630005, 47120081, 11459456, 11455560, 47114125, 47114322, 23633923, 9928, 9814 )

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on thread management and, hopefully, the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things: They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL. If one of the threads finds a better solution than the currently best-known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics). I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional though. The whole system obviously needs to be thread safe, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads, shutting them down, etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...
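
    For what it's worth, the "one engine per metric" part maps onto the TPL fairly naturally: start one Task per optimization metric, guard the shared best-known solution with a lock, and give each task a CancellationToken if a better solution found elsewhere should force a restart. Below is a minimal sketch only; the Func-based "engine", the cost-only solution and the fixed attempt count are stand-ins for the real calculation engines and precedence rules:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class ParallelSearch
        {
            private readonly object _gate = new object();
            private double _bestCost = double.MaxValue;

            // Each "engine" is just a function that tries to beat the cost it is given;
            // a real engine would be a class initialized with its own optimization metric.
            public double Run(double initialCost, Func<double, Random, double>[] engines)
            {
                _bestCost = initialCost;
                Task[] tasks = engines
                    .Select((engine, i) => Task.Factory.StartNew(() => RunEngine(engine, seed: i)))
                    .ToArray();
                Task.WaitAll(tasks);
                return _bestCost;
            }

            private void RunEngine(Func<double, Random, double> engine, int seed)
            {
                var rng = new Random(seed);
                for (int attempt = 0; attempt < 1000; attempt++)   // arbitrary stopping rule
                {
                    double candidate = engine(CurrentBest(), rng);
                    lock (_gate)
                    {
                        if (candidate < _bestCost)
                            _bestCost = candidate;   // restart signalling for other engines would hook in here
                    }
                }
            }

            private double CurrentBest()
            {
                lock (_gate) { return _bestCost; }
            }
        }

    The restart-on-improvement behaviour could be layered on by holding a CancellationTokenSource per engine and cancelling the lower-precedence ones inside the lock whenever the best cost improves; the shared-probability idea would similarly live behind the same lock or a ConcurrentDictionary.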

    Read the article
