Search Results

Search found 8 results on 1 page for 'shahbaz'.

  • How do I stop Ubuntu from detaching minimize/maximize/close buttons?

    - by Shahbaz
    Some time ago I managed to get Ubuntu to keep the window menu bars in the windows themselves rather than in the bar at the top (I'm not sure if this part is Unity or Compiz, or what the difference is). That was done by removing indicator-appmenu. Anyway, now everything is fine except one thing: if I have a window that is maximized, its minimize/maximize/close buttons are still grabbed by the bar at the top. Usually this doesn't cause a problem, because the upper-left corner of the maximized window and the corner of the screen are not too far apart. However, one thing happens to me a lot: I am working on something (programming), then I need to check a few things elsewhere, so I open some windows, see what I want and switch back to my work. Those windows, however, are temporary, so at some point I want to close them. Now here's what happens: I have focus on some window, and I can't close the maximized window behind it unless I click on that window first so that its buttons appear, and only then close it. I couldn't find anything on the internet about this. Is this something that's hardcoded in Unity/Compiz/whatever, or is there actually a way to configure it?

    Read the article

  • Why would I learn C++11, already knowing C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL, is obviously the better way. Sometimes the use of a simple C function pointer is much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "if you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++"; I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs. C++; this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has also introduced many special cases that change how different features behave in different circumstances: placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much like when you say C nowadays you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C, for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What "good" means is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I am not so sure about. I'm not sure that the changes introduced in C++11 make the language better. So the question is: why would I learn C++11? Update: My original question, in short, was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate someone pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument. P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.

    Read the article

  • Where can I learn about managing domain names for my websites? [closed]

    - by Shahbaz
    [I originally asked this question on serverfault.com, where it was closed as 'out of scope.' Hopefully it is appropriate for this forum.] I am a developer who doesn't understand how to effectively manage Internet domain names. Say I registered a name with Namecheap and host a website on Linode. Now, what is an A record? What is a name server, and do I host it with Namecheap or Linode? Why would I pay Amazon when others are free? Does any of this matter in terms of website latency or reliability? I feel like a script kiddie, copying and pasting from others and hoping it works. Is there a book or other resource that explains all this? I know Amazon is full of books about DNS, but afaik they are about setting up DNS servers for local networks, not the Internet. P.S. To emphasize, I'm asking for books or long write-ups which explain this to technically competent people who just haven't had to think about the role of commercial registrars, name servers, commercial hosts, commercial websites, and how all the parts play together on the real Internet (not local networks).

    Read the article

  • How to apply a filter to the screen of a running program?

    - by Shahbaz
    The idea is to take old games, without modifying them, and have the graphics card apply a series of filters to their output before sending it to the monitor. A very crude example would be to take a game that has a resolution of 640x480 and do the following: increase the resolution to 1280x960, apply a blur (low-pass filter), then apply a sharpen (1 + high-pass filter). These steps may not necessarily be the best way to improve the visuals of an old game, but there are a lot of well-known image-processing techniques for this purpose. The question is: do (NVIDIA) graphics cards offer the ability to load a program that modifies the output before it is sent to the monitor? If so, what are such programs called, and what terminology should I use to search for them? I would be comfortable doing the programming myself if this ability is exposed through a library. Also, would the solution be different between Windows and Linux? If so, either is fine, since most of the games are probably runnable under Wine.
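
    The upscale-blur-sharpen chain above is straightforward to prototype on a still frame, which helps when tuning the filters before worrying about hooking the live output. The sketch below applies those three steps to a saved screenshot using only the standard java.awt.image API; the file names and kernel values are illustrative assumptions, and this says nothing about how a driver-level hook would be installed.

        import java.awt.Image;
        import java.awt.image.BufferedImage;
        import java.awt.image.ConvolveOp;
        import java.awt.image.Kernel;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class UpscaleAndSharpen {
            public static void main(String[] args) throws Exception {
                // Illustrative input: any 640x480 screenshot saved to disk.
                BufferedImage src = ImageIO.read(new File("frame_640x480.png"));

                // Step 1: upscale 2x (to 1280x960) with smooth scaling.
                Image scaled = src.getScaledInstance(src.getWidth() * 2,
                                                     src.getHeight() * 2,
                                                     Image.SCALE_SMOOTH);
                BufferedImage up = new BufferedImage(src.getWidth() * 2,
                                                     src.getHeight() * 2,
                                                     BufferedImage.TYPE_INT_RGB);
                up.getGraphics().drawImage(scaled, 0, 0, null);

                // Step 2: 3x3 box blur (low-pass filter).
                float k = 1f / 9f;
                ConvolveOp blur = new ConvolveOp(new Kernel(3, 3, new float[] {
                        k, k, k,
                        k, k, k,
                        k, k, k }));
                BufferedImage blurred = blur.filter(up, null);

                // Step 3: 3x3 sharpen (identity + high-pass filter).
                ConvolveOp sharpen = new ConvolveOp(new Kernel(3, 3, new float[] {
                         0f, -1f,  0f,
                        -1f,  5f, -1f,
                         0f, -1f,  0f }));
                BufferedImage result = sharpen.filter(blurred, null);

                ImageIO.write(result, "png", new File("frame_filtered.png"));
            }
        }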

    Read the article

  • How to convert a Java object (bean) to key-value pairs (and vice versa)?

    - by Shahbaz
    Say I have a very simple Java object that only has some getXXX and setXXX properties. This object is used only to handle values; it is basically a record, or a type-safe (and performant) map. I often need to convert this object to key-value pairs (either strings or type-safe) or convert from key-value pairs to this object. Other than reflection, or manually writing code to do this conversion, what is the best way to achieve this? An example might be sending this object over JMS without using the ObjectMessage type (or converting an incoming message to the right kind of object).
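
    For comparison, here is roughly what the reflection-based route looks like using nothing but the JDK's java.beans.Introspector. It is a sketch, not a recommendation (it is still reflection under the hood, which the question is trying to avoid), and the BeanMapper name is made up.

        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;
        import java.util.HashMap;
        import java.util.Map;

        public class BeanMapper {

            /** Reads every getXXX property of the bean into a name -> value map. */
            public static Map<String, Object> toMap(Object bean) throws Exception {
                Map<String, Object> map = new HashMap<String, Object>();
                for (PropertyDescriptor pd
                        : Introspector.getBeanInfo(bean.getClass(), Object.class)
                                      .getPropertyDescriptors()) {
                    if (pd.getReadMethod() != null) {
                        map.put(pd.getName(), pd.getReadMethod().invoke(bean));
                    }
                }
                return map;
            }

            /** Writes matching entries back through the bean's setXXX methods. */
            public static void fromMap(Object bean, Map<String, Object> values) throws Exception {
                for (PropertyDescriptor pd
                        : Introspector.getBeanInfo(bean.getClass(), Object.class)
                                      .getPropertyDescriptors()) {
                    if (pd.getWriteMethod() != null && values.containsKey(pd.getName())) {
                        pd.getWriteMethod().invoke(bean, values.get(pd.getName()));
                    }
                }
            }
        }

    Libraries such as Apache Commons BeanUtils or Jackson's ObjectMapper do essentially the same property walk with caching and type conversion layered on top; generating the toMap/fromMap pair at build time (or writing it by hand) is the usual way to avoid the reflection cost entirely.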

    Read the article

  • Java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc., and I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination, possibly transforming some of these messages along the way. I am looking at these products to solve the following problems:
    * A safe repository of the server's state. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    * This repository should be distributed: in case one of these data stores crashes, one or two more should be up, and I should be able to switch to them seamlessly.
    * This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. The system needs high throughput and low latency. By sending my data outside the process I am necessarily slowing it down, but I am trying to balance absolute raw speed against absolute protection of the data.
    * This repository should be safe. Similar to the point about several online backups, this system needs to write data to disk (potentially more than one disk).
    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks
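
    Whichever product is chosen, one way to keep that decision reversible is to hide it behind a small store interface that the transformers write their state through. The sketch below is only illustrative: StateStore, InMemoryStateStore and OrderStateDemo are made-up names, and the ConcurrentHashMap implementation merely stands in for a real replicated, disk-backed cache.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        /** Hypothetical abstraction over whichever replicated store is eventually chosen. */
        interface StateStore<K, V> {
            void put(K key, V value);   // must be replicated and disk-backed in production
            V get(K key);
            void remove(K key);
        }

        /** Stand-in implementation: a plain in-process map, with no replication or disk. */
        class InMemoryStateStore<K, V> implements StateStore<K, V> {
            private final Map<K, V> map = new ConcurrentHashMap<K, V>();

            public void put(K key, V value) { map.put(key, value); }
            public V get(K key)             { return map.get(key); }
            public void remove(K key)       { map.remove(key); }
        }

        public class OrderStateDemo {
            public static void main(String[] args) {
                // Each transformer records its state here instead of keeping it locally,
                // so swapping in a real distributed cache only touches the implementation.
                StateStore<String, String> store = new InMemoryStateStore<String, String>();
                store.put("order-42", "NEW");
                store.put("order-42", "FILLED");
                System.out.println("order-42 -> " + store.get("order-42"));
            }
        }

    Swapping in Ehcache, JBoss Cache (or its successor Infinispan), or a memcached client then only touches the implementation class, which makes it practical to benchmark each candidate against the single-digit-millisecond requirement.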

    Read the article

  • Node.js for Lua?

    - by Shahbaz
    I've been playing around with node.js (nodejs) for the past few days, and it is fantastic. As far as I can tell, Lua doesn't have a similar integration of libev and libio, which lets one avoid almost all blocking calls and interact with the network and the filesystem in an asynchronous manner. I'm slowly porting my Java implementation to nodejs, but I'm shocked that LuaJIT is much faster than V8 JavaScript AND uses far less memory! I imagine writing my server in such an environment (very fast and responsive, very low memory usage, very expressive) would improve my project immensely. Being new to Lua, I'm just not sure whether such a thing exists. I'd appreciate any pointers. Thanks

    Read the article

  • Running Hadoop software on office computers (when they are idle)

    - by Shahbaz
    Is there a project which helps set up a Hadoop cluster on office desktops when they are idle? I'd like to experiment with Hadoop/MR/HBase but don't have access to 5-10 computers. The computers at work are idle after hours and are connected to each other through a very high-speed connection. What's more, data on these computers stays within our network, so there is no privacy issue. In order for this to work, I need a fairly lightweight monitor running on each machine. When the computer has been idle for X hours, it will join the cluster. If the user logs on, it has to drop out of the cluster and give all CPU/memory back. Does something like this exist?
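
    The per-desktop monitor described above is small enough to sketch. Everything in the sketch below is hypothetical: minutesSinceLastInput is a stub (real idle detection is OS-specific), and start-node.sh / stop-node.sh stand in for whatever actually starts and stops the Hadoop daemons on a desktop.

        import java.io.IOException;

        /**
         * Sketch of a lightweight per-desktop monitor: poll for user activity
         * and join or leave the cluster accordingly.
         */
        public class IdleClusterMonitor {
            private static final long IDLE_THRESHOLD_MINUTES = 120; // the "X hours" of idleness
            private boolean joined = false;

            public static void main(String[] args) throws Exception {
                IdleClusterMonitor monitor = new IdleClusterMonitor();
                while (true) {
                    monitor.poll();
                    Thread.sleep(60000); // check once a minute
                }
            }

            private void poll() throws IOException {
                long idleMinutes = minutesSinceLastInput();
                if (!joined && idleMinutes >= IDLE_THRESHOLD_MINUTES) {
                    run("./start-node.sh");  // placeholder: start the worker daemons
                    joined = true;
                } else if (joined && idleMinutes == 0) {
                    run("./stop-node.sh");   // placeholder: leave the cluster, free CPU and memory
                    joined = false;
                }
            }

            /** Stub: querying idle time is OS-specific (e.g. via a native call). */
            private long minutesSinceLastInput() {
                return 0;
            }

            private void run(String script) throws IOException {
                new ProcessBuilder(script).inheritIO().start();
            }
        }

    The join-when-idle, leave-when-used behaviour is essentially the cycle-scavenging model that batch systems such as HTCondor are built around, which may be a useful term to search for.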

    Read the article
