Search Results

Search found 20785 results on 832 pages for 'idea'.

Page 44/832 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • DHCP server with database backend

    - by Cory J
    I have been looking around for something to replace my (ancient) ISC-DHCPd server. A DHCP server with a database backend sounds like a great idea to me, as I could then have a nice, friendly web interface to my server. Surprisingly, I can't find any major open-source projects that offer this. Does anyone know of one? I have also read about modifying ISC to use a database backend...can anyone tell me if this solution is stable enough for a busy production server? Or is using a database a Bad Idea™ altogether? PS - http://stackoverflow.com/questions/893887/dchp-with-database-backend looks like SO couldn't answer this old, similar question. EDIT: I am looking for something on a free OS platform, Linux or BSD. If there is something absolutely great that is Windows-only though, I'm still interested.
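
    To make the "friendly web interface" idea concrete, this is the kind of glue I have in mind: a rough sketch that mirrors the stock ISC dhcpd lease file into SQLite so a frontend can query it. The lease-file path and the fields extracted are assumptions, not a drop-in solution.

        import re
        import sqlite3

        LEASES = "/var/lib/dhcp/dhcpd.leases"  # typical path on Debian/Ubuntu

        conn = sqlite3.connect("leases.db")
        conn.execute("CREATE TABLE IF NOT EXISTS leases (ip TEXT PRIMARY KEY, mac TEXT, ends TEXT)")

        lease_re = re.compile(r"lease (?P<ip>[\d.]+) \{(?P<body>.*?)\}", re.S)
        mac_re = re.compile(r"hardware ethernet (?P<mac>[0-9a-fA-F:]+);")
        ends_re = re.compile(r"ends \d (?P<ends>[\d/: ]+);")

        with open(LEASES) as f:
            text = f.read()

        for m in lease_re.finditer(text):
            mac = mac_re.search(m.group("body"))
            ends = ends_re.search(m.group("body"))
            # upsert each lease block so re-runs keep the table current
            conn.execute("INSERT OR REPLACE INTO leases VALUES (?, ?, ?)",
                         (m.group("ip"),
                          mac.group("mac") if mac else None,
                          ends.group("ends") if ends else None))
        conn.commit()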

    Read the article

  • What is the safest way to remove a swap partition?

    - by user212062
    I am running Ubuntu 12.04 on a 64-bit HP laptop with a 16 GB flash drive. I do not have a working hard drive right now. When I installed Ubuntu, I created a 2 GB swap partition on sdb1. I have since learned that swap partitions are generally a bad idea on flash drives, so I would like to use my swap space for my other partitions. You can see my partition scheme in the link below. I have read that I just have to comment sdb1 out of the fstab file, boot from a GParted live CD, select swapoff for sdb1, delete/merge with the other partition, and everything's good. But I've also read that messing with sdb1 can change the UUID of sdb2 or sdb3 and cause problems. Is this true? Does initramfs use swap at all? Also, when I get Ubuntu running on my laptop with an internal hard drive, does the swap partition help that much? I have 6 GB of DDR3. Does the rule of 1.5× actual RAM still apply? It seems like quite a bit to me. Thanks for the help! UPDATE: I have removed swap. The process I followed was: 1) right-clicked the swap partition in GParted and selected swapoff; 2) used # to comment the swap partition out of fstab; 3) tried to boot from a live GParted CD, but kept getting an error, so I ran GParted in Ubuntu instead; 4) deleted the swap partition in GParted; 5) unmounted /windows; 6) expanded /windows to take the remaining space; 7) mounted /windows. The / and /windows partitions each kept their own names and UUIDs, and everything is running fine. I have never seen any swap space being used before, and I don't intend to use the hibernate function, so I think removing swap was a good idea.

    Read the article

  • Extending an ABCS

    - by jamie.phelps
    All AIA Application Business Connector Services (ABCS) are extension-enabled out of the box. The number and location of extension points in each ABCS depends on whether the ABCS is a request-response or fire-and-forget service. Below is an example of a request-reply ABCS with 4 extension call-out points: Pre-transformation, Post-transformation/Pre-invoke, Post-invoke/Pre-transformation, Post-transformation/Pre-reply. You can also see in the diagram that each XSL transformation has its own extension call-out; however, for now we are only discussing the ABCS extension call-outs. To extend an ABCS, you'll first need to identify the specific extension points that are available in your ABCS and choose the one or more that you want to implement. You can get an idea of the extension points available in your ABCS by looking into the AIAConfigurationProperties.xml file found under the AIA_HOME/config directory. Find the configuration section for your ABCS and look for properties similar to the following: one boolean flag per extension call-out (the four listed above), each defaulting to false. Each extension point in the ABCS will have a corresponding configuration property to control whether or not the extension call-out is active at runtime. So these properties can give you some idea of what extension points are available in your ABCS. However, you'll probably also want to look into the ABCS BPEL code itself to confirm the exact location of the call-out.

    Read the article

  • AJAX driven "page complete" function? Am I doing it right?

    - by Julian H. Lam
    This one might get me slaughtered, since I'm pretty sure it's bad coding practice, so here goes: I have an AJAX-driven site which loads both content and javascript in one go using Mootools' Request.HTML. Since I have initialization scripts that need to be run to finish "setting up" the template, I include those in a function called pageComplete(), on every page. Moving from one page to another causes the previous pageComplete() function to no longer apply, since a new one is defined. The javascript function that loads pages dynamically calls pageComplete() blindly when the AJAX call is completed and the result is loaded onto the page:

        function loadPage(page, params) { // page is a string, params is a javascript object
            if (pageRequest && pageRequest.isRunning) pageRequest.cancel();
            pageRequest = new Request.HTML({
                url: '<?=APPLICATION_LINK?>' + page,
                evalScripts: true,
                onSuccess: function(tree, elements, html) {
                    // Empty previous content and insert new content
                    $('content').empty();
                    $('content').innerHTML = html;
                    pageComplete();
                    pageRequest = null;
                }
            }).send('params='+JSON.encode(params));
        }

    So yes, if pageComplete() is not defined in one of the pages, the old pageComplete() is called, which could potentially be disastrous, but as of now, every single page has pageComplete() defined, even if it is empty. Good idea, bad idea?

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An Error has occurred, please submit the below information to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's systems administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof, of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right?
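
    One compromise I've seen elsewhere (a sketch of the general pattern, not what our application currently does): log the full stack trace server-side under a generated ID, and show the user only that ID, so support can correlate screenshots with log entries. The handler names here are invented for illustration.

        import logging
        import traceback
        import uuid

        log = logging.getLogger("app")

        def do_work(request):
            raise RuntimeError("boom")  # stand-in for the real request handler

        def handle(request):
            try:
                return do_work(request)
            except Exception:
                error_id = uuid.uuid4().hex[:8]
                # full detail stays in the server log, keyed by the ID
                log.error("error %s\n%s", error_id, traceback.format_exc())
                return "An error has occurred. Please report code %s." % error_id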

    Read the article

  • Are the Ubuntu ISO images on releases.ubuntu.com updated after release?

    - by tijybba
    Just got the idea from this (possibly unrelated) question. Are the ISO images on the official site updated with updates to the core Ubuntu system, like kernel updates, desktop environment updates (Unity), and updates to the base system including X.org, the office suite, the package manager, the update manager, and the GNOME base modules, i.e. those released in update branches like precise-updates? The reason I am asking is that if I download the ISO image of Ubuntu 12.04, say, two or three months after release, I have to do an update of approximately 200-300 MB. So why are these ISO images not refreshed with recent updates? I am aware that all of the components are not updated at the same time, but let's say one month after the actual release (both LTS and normal releases), the updated components could be rolled into an updated ISO at regular intervals, which would let new users get the latest versions and features with improved stability and less bandwidth consumption. I am not suggesting a rolling release, updates from external PPAs, or a netinstall, but an ISO of updated packages; this could be provided as an optional download. Since my question stays within the boundary of official update releases, stability should not be the objection. I guess there are custom packagers out there, but having an official option would be better. It would help distribute a newer ISO that impresses new users, since it makes newer features available and, of course, a faster system. Another reason for asking is here. EDIT: Almost all new (desktop) users download the default ISOs and hit one or two issues that may have been corrected in later updates. Most of the new laptop users I encountered gave up because of this, so should I suggest that laptops not on the certified hardware list try daily builds if needed?

    Read the article

  • Strategies for managing use of types in Python

    - by dave
    I'm a long-time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck typing thing. On some methods, I'll document the expected type of parameters (eg: list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: Example of the parameter naming issues: in our code base we have a task object (ORM object) and a task_obj object (higher-level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties; it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.
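
    To illustrate a few of these strategies side by side (the Task/TaskObj names echo the edit above; the code itself is invented for illustration):

        class Task:                      # stand-in for the ORM object
            pass

        class TaskObj:                   # stand-in for the higher-level wrapper
            def __init__(self, task):
                self.task = task

        def close_tasks(tasks):
            """Close each task.

            :param tasks: list of Task ORM objects (the "document the type" strategy)
            """
            # the assert strategy: fail fast on the wrong type, at the cost
            # of giving up duck typing for this parameter
            assert all(isinstance(t, Task) for t in tasks), "expected Task instances"
            for t in tasks:
                pass  # real work would go here
            return len(tasks)

        print(close_tasks([Task(), Task()]))  # 2; close_tasks([TaskObj(Task())]) would assert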

    Read the article

  • Konami Code on WordPress

    - by ninjaboi21
    Hey SuperUsers, I would love to get a Konami Code trigger on my WordPress blog, but I really have no idea how to implement it... For those of you who have no idea what the Konami Code is, here is a link: http://en.wikipedia.org/wiki/Konami_Code It's a sequence of inputs a user can enter on a site, after which something user-defined pops up. What I would love is this: if someone enters the Konami Code on my website, everything fades away and some text appears instead. How do you do that? Here's an example of it: zeno.name Thanks in advance!

    Read the article

  • Run script when POST data is sent to Apache

    - by Nathan Adams
    Among my several years of running servers there seems to be a pattern to most spam activity. My question/idea is: is there a way to tell Apache to run a script when POST data is detected? What I would want to do is perform a reverse DNS lookup on the client's IP address, and then perform a forward DNS lookup on the hostname in the PTR record. Afterwards, perform some checks; excuse the pseudo-code:

        if PTR does not exist:
            deny POST request
        else if IP of PTR hostname == client's IP:
            allow POST request
        else:
            deny POST request

    Though I don't care about GET requests, even though they can be just as malicious, this idea is targeted at spam comments, which use POST data to send the comment data to the web server. To avoid adding much of a time delay, I would run my own recursive DNS server. Please do note, this isn't meant to be a silver bullet for spam, but it should decrease the volume. Possible or impossible?
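
    The check itself (forward-confirmed reverse DNS) is easy to express; here is a minimal runnable version using only the Python standard library, leaving aside how Apache would invoke it:

        import socket

        def allow_post(client_ip):
            try:
                ptr_host = socket.gethostbyaddr(client_ip)[0]   # reverse (PTR) lookup
            except (socket.herror, socket.gaierror):
                return False                                    # no PTR record: deny
            try:
                forward_ip = socket.gethostbyname(ptr_host)     # forward lookup on the PTR name
            except socket.gaierror:
                return False
            return forward_ip == client_ip                      # allow only if they round-trip

        print(allow_post("8.8.8.8"))  # example IP; result depends on your resolver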

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. Trouble is, we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn-down and velocity charts. Most importantly, we know our average velocity for an iteration, keeping us from biting off more than we can chew. I am not looking to modify the way we do development but want to present our development activities in a report that someone only familiar with waterfall will understand. In What Does an Agile Project Plan Look Like, Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets: an agile project plan is feature-based; it is organized into iterations; it has different levels of detail depending on the time frame; and it is owned by the team. Being able to explain the differences is great, but how best to present the data?

    Read the article

  • SSH tunnel & Rsync thru two proxy/firewalls

    - by cajwine
    Scenario:

        [internal_server_1]AA------AB[firewall_1]AC----+
                  10.2.0.3-^        ^-10.2.0.2         |
                                                    internet
                  10.3.0.3-v        v-10.3.0.2         |
        [internal_server_2]BA------BB[firewall_2]BC----+

    Ports AC and BC have valid internet addresses. All systems run Linux and I have root access to all of them. I need to securely rsync internal_server_1:/some/path to internal_server_2:/another/path. My idea is to make a secure SSH tunnel between the two firewalls, e.g. from firewall_1:

        firewall1# ssh -N -p 22 -c 3des user2@firewall_2.example.com -L xxx:10.3.0.3:xxxx

    and afterwards run rsync from internal_server_1, something like:

        internal1# rsync -az /some/path user@...:/another/path

    I don't know how to make a correct ssh tunnel for rsync (which ports the tunnel needs) or where I should point the rsync (which remote address to use when going through the ssh tunnel). Any idea or pointer to a helpful internet resource for this case? Thanks.

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which involves combining Go/Obj-C interfaces/protocols with the ability to add virtual methods from any unit, not just the one which defines a class. The idea is to allow Ruby-ish open classes, so you can take a minimalist approach to library development and attach small pieces of functionality as they are actually needed by the whole program. The implementation involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping and pass it off, so you can have performance comparable to C/C++. In this scheme, methods may be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems like it would be very flexible (disregarding the potential for spaghetti code, which can happen with just about any model you use). By wrapping the system calls for binding methods up in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use-case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite bring myself to a use case that would justify the overhead other than "it could make some kinds of code more extensible in future scenarios."
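
    To make the open-classes half concrete, here is a toy sketch in Python terms: methods attached to a class after the fact, plus a Go-style structural check when "casting" to an interface. All names are invented for illustration; the real design would do this at the RTTI level, not with dictionaries.

        class Widget:
            pass

        def draw(self):               # defined in another "unit"...
            return "drawing"

        Widget.draw = draw            # ...and attached later (open classes)

        class Drawable:               # an "interface": just a list of required methods
            required = ("draw",)

        def cast(obj, iface):
            # Go-style: verify the method set lazily and fail if anything is missing
            missing = [m for m in iface.required if not callable(getattr(obj, m, None))]
            if missing:
                raise TypeError("%s does not satisfy %s: %s"
                                % (type(obj).__name__, iface.__name__, missing))
            return obj

        print(cast(Widget(), Drawable).draw())  # OK once draw() has been attached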

    Read the article

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea... So here's my idea: set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I've tested them, and all servers could automatically (apt + cron?) upgrade from this repository. So my question is this: how do I configure apt on the clients so that they use the local repository for all packages which exist in the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyway, thanks for your insight! Andreas.
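
    One way to get that behaviour is apt pinning; a sketch (the repository hostname and suite name are invented, and this is untested on 10.04 specifically). Packages absent from the pinned repo fall back to their normal upstream priority:

        # /etc/apt/sources.list.d/approved.list: the local "approved" repository
        deb http://apt.internal.example/ubuntu lucid-approved main

        # /etc/apt/preferences: prefer the local repo whenever it carries a package;
        # a priority above 1000 means it wins even over newer upstream versions
        Package: *
        Pin: origin apt.internal.example
        Pin-Priority: 1001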

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint, for the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with a code-first approach, and an ASP.NET Web API client in C# that is about to be refactored. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about: Shared data-access DLL. The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system, so both solutions would be able to make a migration. The main problem is that it is much more complicated to update the Web API system than the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries. Database-first on the Web API. The second idea was just to use the database-first approach on the Web API side. But it seems that all annotations would be lost on each model update. Other solutions involving stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or any advice on how such a problem can be solved?

    Read the article

  • .htaccess url rewriting problem

    - by letsworktogether
    I'm kind of stuck at this part and was hoping that I'd get some assistance. I'm building a highscores page in PHP; that's going great, it works. However, I dislike the idea of "index.php?skill=name" and therefore wanted a bit of SEO in this. I have successfully replaced the url with a more friendly version: "highscores/skill/name". And this is where the problem starts. I have added pagination to the highscores, and the page is read from the HTTP GET page variable ($_GET['page']). I dislike the idea of "highscores/skill/name&page=2" and was hoping you guys could assist me in making the url like the following: page 1, accessing the file without declaring the page number: DOMAIN.TLD/highscores/skill/name; page 2, where the page variable is needed: DOMAIN.TLD/highscores/skill/name/2. As you can tell, the "2" will define page 2 and load the correct data for page 2. However, I'm having much trouble in my .htaccess file to configure it this way. My latest attempt to get it to work:

        RewriteRule ^highscores\/skill\/(.*?)(\/(.*?)*)$ highscores/skills.php?skill=$1&page=$2 [L] # Skills page

    Unfortunately it does not work; it makes the page look horrible (CSS doesn't work) and it doesn't go to the page specified in the URL. I hope you understand my issue, thank you!
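
    For reference, a hedged guess at a working rule (untested): capture the skill up to the next slash and an optional numeric page, since $2 in the rule above captures the leading slash as well. The CSS breakage is likely the usual relative-URL problem: the browser resolves stylesheet paths against the rewritten URL, so asset links need to be absolute (or set a base href).

        RewriteEngine On
        # skill = everything up to the next slash; page = optional trailing number
        RewriteRule ^highscores/skill/([^/]+)/?([0-9]*)$ highscores/skills.php?skill=$1&page=$2 [L]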

    Read the article

  • How to write a blog for SEO purposes

    - by Mathieu Imbert
    I have a photo sharing website, which provides very little textual content. Users can add tags to photos and a description, but it creates a lot of duplicate content, because most of the descriptions will be 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests, explain themes; in short, create original content that search engines will love. Our website's main URL is like www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any page rank to www.domain.com? If the blog ranks well it will obviously pass some page rank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog in www.domain.com? Or leave it on blog.domain.com?

    Read the article

  • Electronics 101 for kids: littleBits review

    - by Daniel Cazzulino
    I'm always on the lookout for cool toys that can empower my kids (9, 6 and 2yo) to be creative and break the mold of being just consumers of others' ideas. I recently came across littleBits while watching a TED video of their creator. I was immediately hooked. It seemed like the perfect blend of simplicity and self-learning that I was looking for to get my kids into electronics. So I went ahead and purchased the kit from SparkFun and a bunch of standalone parts ("bits") from the site itself. There are also a bunch of videos and pictures on their site to get a better idea of what they are, as well as a multitude of YouTube videos. This weekend I gave them to my kids, and coincidentally, we also travelled to my hometown and they got to share them too with their cousins. Man, what a blast it was! I decided to approach this "toy" just like one of the iPod/iPad games I buy for them: "How it's used? I've no idea! I just heard it was great, you go figure it out!". And figure it out they did... Read full article

    Read the article

  • Use Mac Pro as time machine, server and editing station?

    - by Dan
    Background: My fiancée needs a Mac Pro for movie editing and rendering. I need a web server and a backup solution for my MacBook Pro. Idea: We thought we could split the costs of the Mac Pro and set it up to act as both a web server and a backup device. Question: Is this a good idea? Specifically: Is it easy to set it up to incrementally back up one or several laptops over wifi? And what software would you recommend? Is it silent and stable enough to run a web server continuously? Will it manage all this, including simultaneous editing? Thanks.

    Read the article

  • Why is Desktop Unity using the global application menu?

    - by Kazade
    It was announced in another question that the desktop version of Unity will keep the global menu by default. Here are the facts: the global menu was introduced into UNE to save vertical screen space, because at netbook resolutions vertical space is limited. On a modern desktop with a high resolution, there is ample vertical space, making this unnecessary. On the announcement of UNE global menus, Mark Shuttleworth himself said the following: "There are outstanding questions about the usability of a panel-hosted menu on much larger screens, where the window and the menu could be very far apart." The benefits of a global menu don't seem to carry across to a high-resolution desktop, and instead it seems to bring drawbacks (increased mouse travel, large distance between the menu and its associated window). The other worrying factor is that applications seem to be moving away from having a menu bar, and instead of innovating on this and defining new guidelines for moving away from the menu, we are giving it prime place right at the top of the desktop. If applications continue moving away from the menu bar, we will have an inconsistent experience concerning where to locate application-related options/tools depending on which app you are using (e.g. Chrome). Finally, the current global menu bar implementation doesn't work for all apps, and doesn't even work for all apps in the default install. This means that the default desktop implementation will be inconsistent. So, there are a bunch of reasons why moving to a global menu is a bad idea, and we need some pretty convincing arguments for why it is a good idea. What are the reasons for the global menu implementation in the desktop version of Unity?

    Read the article

  • Event notifications for Reporting Systems

    - by Marc Schluper
    The last couple of months I have been working on an application that allows people to browse a data mart. Nice, but nothing new. In this context I have an idea that I want to publish before anyone else patents it: event notifications. You see, reporting systems are not used as much as we’d like. Typically, users don’t know where to look for reports that might interest them. At best, there are some standard reports that people generate every so often, i.e. based on a time trigger. Or some reporting systems can be configured to send monthly reports around, for convenience. But apart from that, the reporting system is just sitting there, waiting for the rare curious user who makes the effort to dig a bit for treasures to be found. Wouldn’t it be great if there were data triggers? Imagine we could configure the reporting system to let us know when something interesting has happened. It would send us a message containing a link that would take us to the relevant section of the reporting system, showing a report with all the data pertaining to that event, preparing us for proper actions. Here in the North West this would really be great. You see, it rains here most of the time from October to June, so why even check the weather forecast? But sometimes, sometimes it snows. And sometimes the sun shines. So rather than me going to the weather site and seeing over and over again that it will be raining, making me think “why bother?” I’d like to configure the weather site so that it lets me know when the rain stops. Now, hopefully nobody has patented this idea already. Let me know.
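
    The mechanics of such a data trigger are easy enough to sketch. Here is a bare-bones version; the table, query, addresses, and URL are all invented for illustration, and a real reporting system would debounce repeat notifications rather than re-mailing every hour:

        import smtplib
        import sqlite3
        import time
        from email.message import EmailMessage

        QUERY = "SELECT count(*) FROM observations WHERE condition = 'snow'"  # the "event"
        REPORT_URL = "https://reports.example.com/weather?condition=snow"     # deep link into the report

        def notify():
            msg = EmailMessage()
            msg["Subject"] = "Reporting system: snow detected"
            msg["From"] = "reports@example.com"
            msg["To"] = "me@example.com"
            msg.set_content("Something interesting happened: " + REPORT_URL)
            with smtplib.SMTP("localhost") as s:
                s.send_message(msg)

        conn = sqlite3.connect("mart.db")
        while True:
            (hits,) = conn.execute(QUERY).fetchone()
            if hits:
                notify()
            time.sleep(3600)  # check hourly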

    Read the article

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague about how to handle SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service; should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be." On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause." I'd like to know what you think about this. Thank you.
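
    To make the colleague's scheme concrete, here is a language-neutral sketch with Python standing in for the service layer (WCF would use typed fault contracts for the same effect; all names here are invented): known, user-recoverable problems pass through as typed faults, while everything else collapses into one opaque system fault whose details stay in the server log.

        import logging

        class RecoverableFault(Exception):
            """Base for faults the user can act on (e.g. a badly keyed value)."""

        class DuplicateInvoiceFault(RecoverableFault):
            pass

        class SystemFault(Exception):
            """Opaque server-side failure; details stay in the server log."""

        def service_boundary(handler):
            """Decorator marking the line where exceptions get marshalled out."""
            def wrapper(*args, **kwargs):
                try:
                    return handler(*args, **kwargs)
                except RecoverableFault:
                    raise                      # pass through: the client can recover
                except Exception:
                    logging.exception("unhandled error at service boundary")
                    raise SystemFault("internal error") from None
            return wrapper

        @service_boundary
        def post_invoice(number):
            if number == 42:
                raise DuplicateInvoiceFault("invoice 42 already posted")
            return "posted"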

    Read the article

  • Proportional movement speed between mouse and cube

    - by user1350772
    Hi, I'm trying to move a cube with the freeglut mouse motion callback, glutMotionFunc(processMouseActiveMotion); my problem is that the movement is not proportional to the mouse movement speed. MouseButton function:

        #define MOVE_STEP 0.04
        float g_x = 0.0f;

        glutMouseFunc(MouseButton);
        glutMotionFunc(processMouseActiveMotion);

        void MouseButton(int button, int state, int x, int y) {
            if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
                initial_x = x;
            }
        }

    When the left button gets clicked, the x coordinate is stored in the initial_x variable.

        void processMouseActiveMotion(int x, int y) {
            if (x > initial_x) {
                g_x -= MOVE_STEP;
            } else {
                g_x += MOVE_STEP;
            }
            initial_x = x;
        }

    When I move the mouse, I look at which way it moved by comparing the mouse's new x coordinate with the initial_x variable: if x > initial_x the cube moves to the right, if not it moves to the left. Any idea how I can move the cube according to the mouse movement speed? Thanks. EDIT 1: The idea is that when you click on any point of the screen and drag left or right, the cube moves proportionally to the mouse movement speed.

    Read the article

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have 2 Debian Linux 6.0.4 servers that show a strange behaviour: after 5 to 10 days they hang. By this I mean the servers stop answering ping and need to be restarted. I've been struggling with this problem for a couple of months now; here are some thoughts on what I tried without being able to solve the problem. I changed the RAM on one server. These being 2 different servers, I doubt it could be something hardware-related, as a 3rd identical server doesn't have this problem. I logged the server load, and when a server hangs the load is fine (quite low). I cannot find anything in the server logs; the logs are fine right up until the server freezes. I don't have access to the console, unfortunately. While I have years of admin experience, I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)

    Read the article

  • Switching to workspaces view shows buggy blue background

    - by G1i1ch
    When switching to the workspaces view, the background turns blue. The same happens when I switch between multiple windows of the same app. This started happening after trying GNOME Shell out of curiosity, installed through the official repos as normal. I tried it out but switched back. Anyone have an idea of why this is happening? I've got an Intel GPU and Unity 3D. If anyone can give me some direction, thanks a lot. Update: It looks like during the switch OpenGL was somehow disabled. glxinfo returns:

        name of display: :0
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Error: couldn't find RGB GLX visual or fbconfig
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".
        Xlib: extension "GLX" missing on display ":0".

    This is very distressing; all I wanted to do was try out GNOME 3. Does anyone have an idea of how I can get OpenGL back? [update] OK, I found out what happened. Apparently it's not that hard to accidentally install an nvidia driver. All I had to do was remove all the packages that had nvidia in them and I got 3D back! For anyone else, this is how I found out: check out Xorg.0.log (`$ cat /var/log/Xorg.0.log | more`); if you see a line like this somewhere: `(EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)`, you've got an nvidia problem. Thanks everyone for trying to help :D

    Read the article

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred. You then handle the event and continue doing what you did before. To map the above definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to the listening it did before. However, this event happening and us getting notified 'just like that' is too much for me to handle. You can say: "It's not 'just like that', you have to register an event listener". But what's an event listener but a function which for some reason isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? Events are a nice abstraction to work with, but just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the programming language implementation or the OS) are doing it for us. It basically comes down to the following pseudo-code, which is running somewhere low enough that it doesn't result in busy waiting:

        while True:
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear if this is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards
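
    For a concrete version of "the lower levels do it for us", here is a minimal echo-server sketch using Python's selectors module (epoll/kqueue under the hood): the OS blocks the process until a socket is actually ready, so the loop sleeps instead of spinning. The port number is arbitrary.

        import selectors
        import socket

        sel = selectors.DefaultSelector()

        listener = socket.socket()
        listener.bind(("localhost", 9999))
        listener.listen()
        listener.setblocking(False)
        sel.register(listener, selectors.EVENT_READ, data="accept")

        while True:
            # blocks in the kernel until something is ready: no busy waiting
            for key, _ in sel.select():
                if key.data == "accept":
                    conn, _ = key.fileobj.accept()
                    conn.setblocking(False)
                    sel.register(conn, selectors.EVENT_READ, data="echo")
                else:
                    chunk = key.fileobj.recv(1024)
                    if chunk:
                        key.fileobj.sendall(chunk)   # echo the data back
                    else:
                        sel.unregister(key.fileobj)  # client closed the connection
                        key.fileobj.close()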

    Read the article
