Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • Visual Studio LightSwitch: Yes, these are the droids you're looking for

    - by Jim Duffy
    With all the news and focus on the new features coming in Silverlight 5, I thought I'd take a few minutes to remind folks about the work that Microsoft has done on LightSwitch, since the applications created by LightSwitch are Silverlight applications. LightSwitch makes it easier for non-coders to build business applications and easier for coders to maintain them. For those not familiar with LightSwitch, it is a new tool that provides an easier and quicker way for coder and non-coder types alike to create line-of-business applications for the desktop, the web, and the cloud. The target audience for this tool is those power-user types who create Access applications for their organization. While those Access applications fill an immediate need, they typically aren't very scalable, extensible, or maintainable by the development staff of the organization. LightSwitch creates applications based on technologies built into Visual Studio, thus making it easier for corporate developers to extend and maintain them. LightSwitch is currently in beta, but it will ultimately become a new addition to the Visual Studio line of products. Go ahead and download the beta to get a better idea of what the product can do for your organization. The LightSwitch Developer Center contains links to download the beta, instructional videos, tutorials, and the LightSwitch Training Kit. Another quality resource for LightSwitch information is the Visual Studio LightSwitch Team Blog. My good friend Beth Massi is on the LightSwitch team and has additional valuable content on her blog. Have a day.

    Read the article

  • I can't add PPA repository behind the proxy (with @ in the username)

    - by kenorb
    I'm trying to add the PPA repository (as root) with the following command:

        export HTTP_PROXY="http://[email protected]:[email protected]:8080"
        add-apt-repository ppa:nilarimogard/webupd8

    Unfortunately it doesn't work:

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 125, in <module>
            ppa_info = get_ppa_info_from_lp(user, ppa_name)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 84, in get_ppa_info_from_lp
            curl.perform()
        pycurl.error: (56, 'Received HTTP code 407 from proxy after CONNECT')

    It looks like curl is connecting to the proxy, but the proxy says that authentication is required. I've tried with .curlrc and the http_proxy environment variable instead, but it doesn't work.

        strace -e network,write -s1000 add-apt-repository ppa:nilarimogard/webupd8
        socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
        socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
        connect(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, 16) = -1 EINPROGRESS (Operation now in progress)
        getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, [16]) = 0
        getsockname(4, {sa_family=AF_INET, sin_port=htons(46025), sin_addr=inet_addr("161.20.75.220")}, [16]) = 0
        sendto(4, "CONNECT launchpad.net:443 HTTP/1.1\r\nHost: launchpad.net:443\r\nUser-Agent: PycURL/7.22.0\r\nProxy-Connection: Keep-Alive\r\nAccept: application/json\r\n\r\n", 146, MSG_NOSIGNAL, NULL, 0) = 146
        recvfrom(4, "HTTP/1.1 407 Proxy Authentication Required\r\nProxy-Authenticate: BASIC realm=\"proxy\"\r\nCache-Control: no-cache\r\nPragma: no-cache\r\nContent-Type: text/html; charset=utf-8\r\nProxy-Connection: close\r\nSet-Cookie: BCSI-CS-91b9906520151dad=2; Path=/\r\nConnection: close\

    Maybe it's because there is an @ sign in the username? Wget works fine with the proxy. Related: How do I add a repository from behind a proxy? Environment: Ubuntu 12.04; curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3; curl features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
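    One likely culprit, offered as a guess: curl splits the proxy URL on the first @ it finds, so a literal @ inside the username terminates the credentials early. Percent-encoding that @ as %40 usually gets the URL through intact. A hypothetical example; the username, password, and proxy host below are placeholders, not values from the question:

        # The "@" in the username becomes "%40" so curl can find the real user:pass@host boundary
        export HTTP_PROXY="http://user%40example.com:[email protected]:8080"
        export http_proxy="$HTTP_PROXY"
        export https_proxy="$HTTP_PROXY"
        # -E preserves the proxy variables when switching to root
        sudo -E add-apt-repository ppa:nilarimogard/webupd8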

    Read the article

  • Help with a simple incremental backup script

    - by Evan
    I'd like to run the following incomplete script weekly as a cron job to back up my home directory to an external drive mounted as /mnt/backups:

        #!/bin/bash
        #
        TIMEDATE=$(date +%b-%d-%Y-%k:%M)
        LASTBACKUP=pathToDirWithLastBackup
        rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

    My first question is: how do I correctly set LASTBACKUP to the most recently created directory in /mnt/backups? Secondly, I'm under the impression that using --link-dest means that files in previous backups will not be copied in later backups if they still exist, but will rather link back to the originally copied files. However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date without losing files that may be linked in those backups by current backups? Basically I'm looking to merge all the files before a certain date into a single backup, if that makes more sense than the way I initially framed the question :). Can --link-dest create hard links, and if so, would just deleting the previous directories leave the linked files intact? Finally, I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just use the line gzip --rsyncable /backups/myfiles$TIMEDATE after I run rsync, so that sequential rsync --link-dest executions would find already copied and compressed files? I know that's a lot, so many thanks in advance for your help!!
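    A minimal sketch of one way to wire this up, assuming the backups all live directly under /mnt/backups and that a "latest" symlink (a convention introduced here, not something in the original script) always points at the newest one. Two points worth stating up front: --link-dest does create hard links, so deleting an old backup directory only removes links, and the file data survives as long as any newer backup still references it; and gzip-compressing a finished backup folder would defeat --link-dest on the next run, because the compressed copies no longer match the files rsync compares against.

        #!/bin/bash
        TIMEDATE=$(date +%b-%d-%Y-%k:%M)
        BACKUPROOT=/mnt/backups
        # Resolve the previous backup through the "latest" symlink. On the very
        # first run this resolves to a nonexistent path and rsync simply makes
        # a full copy after printing a warning.
        LASTBACKUP=$(readlink -f "$BACKUPROOT/latest")
        # Hard-link files that are unchanged since the last backup
        rsync -a --numeric-ids --link-dest="$LASTBACKUP" /home/myfiles "$BACKUPROOT/myfiles-$TIMEDATE"
        # Re-point "latest" at the backup we just created
        ln -snf "$BACKUPROOT/myfiles-$TIMEDATE" "$BACKUPROOT/latest"
        # Prune old backups: this deletes hard links only, never file contents
        # that a newer backup still references
        find "$BACKUPROOT" -maxdepth 1 -name 'myfiles-*' -mtime +90 -exec rm -rf {} +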

    Read the article

  • HUGE EF4 Inheritance Bug

    - by djsolid
    Well, maybe not for everyone, but for me it is definitely really important, so I will get straight to the point. We have the following model: Which maps to the following database: We are using EF4.0 and we want to load all Burgers, including BurgerDetails. So we write the following query: But it fails. The error is: "The ResultType of the specified expression is not compatible with the required type. The expression ResultType is 'Transient.reference[SampleEFDBModel.Food]' but the required type is 'Transient.reference[SampleEFDBModel.Burger]'. Parameter name: arguments[0]" So in the new version of EF there is no way to eager-load data through navigation properties with 1-1 relationships defined in subclasses. Here is the relevant Microsoft Connect issue. It is described through another example, but the result is the same. Please, if you think this is important, vote it up on Microsoft Connect. EF 4.0 has many improvements. I have been using it since v1 in large-scale projects, and this version is faster, produces cleaner SQL, is more reliable, and can be used for complicated business scenarios. That is why I believe this issue should be solved as soon as possible. I understand that release cycles are slow, but I am hoping at least for a hotfix. I have also uploaded the example project so you can test it. Download it from here. If anyone has found any workarounds, please post them in the comments section. Thanks!

    Read the article

  • How to make a file load in my program when a user double-clicks an associated file.

    - by Edward Boyle
    I assume in this article that file extension association has been set up by the installer. I may address file extension association at a later date, but for the purpose of this article, I address what sometimes eludes new C# programmers. This is sometimes confusing because you just don't think about it — you have to access a file that you rarely touch when making Windows Forms applications: "Program.cs".

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }
        }

    There are so many ways to skin this cat, so you get to see how I skinned my last cat:

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main(string[] args)
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Form1 mainf = new Form1();
                if (args.Length > 0)
                {
                    try
                    {
                        if (System.IO.File.Exists(args[0]))
                        {
                            mainf.LoadFile = args[0];
                        }
                    }
                    catch
                    {
                        MessageBox.Show("Could not open file.", "Could not open file.",
                            MessageBoxButtons.OK, MessageBoxIcon.Information);
                    }
                }
                Application.Run(mainf);
            }
        }

    It may be easy to miss, but don't forget to add the string array for the command line arguments: static void Main(string[] args). This is not part of the default Program.cs. You will notice the mainf.LoadFile property. In the main form of my program I have a public string property, LoadFile, backed by the field private string loadFile = String.Empty;. In the form's Load event I check the value of this field:

        private void Form1_Load(object sender, EventArgs e)
        {
            if (loadFile != String.Empty)
            {
                // The only way this field is NOT String.Empty is if we set it in
                // static void Main() of Program.cs.
                // Load it however it is needed: OpenFile, SetDatabase, whatever you use.
            }
        }

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    This applies if you use the standalone server, or BIP with OBIEE, with OC4J as the web server. Have you ever taken a looksee at the console window (DOS box/xterm) that you use to start it? Ever turned on debugging and seen masses of info flow by that window, and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output directly to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just plugged the following into the file under the start section. I just modified the line:

        set CMDARGS=-config "%SERVER_XML%" -userThreads

    to:

        set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    Bounced the server and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check out the user docs to find out how to get the log files to write. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • How to negotiate with software vendors who do not follow HL7 standards

    - by Peter Turner
    Take, for instance, the "". I'd hope that anyone who has spent any time dealing with HL7 messages knows that the "" signifies that something should be deleted. "" is not an empty string, it's not a filler, etc. But occasionally one may meet a vendor who persists in sending "" instead of just sending nothing at all. Since I work for a small business and have an extremely flexible HL7 interface, I can ignore ""'s in received messages. But these things are adding up. Some vendors like to send custom-formatted fields with pseudo-components that they leave others to interpret for themselves. Some vendors send all their information in note segments and assume you're only going to show users the information they send, in a monospace font. Some vendors even have the audacity to send carriage return line feeds at the end of each line of a file interface. Some vendors absolutely refuse to send decimal numbers and in so doing refuse to send any numbers. So, with all this crippling humanity against the simple plastic software man, how does one bend without breaking*? Or better yet, how does one fight back and still make money? *my answer is usually to create an interface for the interface and keep the HL7 processing pure, but I don't think this is the best solution
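    For what it's worth, one way to build that "interface for the interface" is a small pre-processing filter that normalizes the vendor's quirks before the real HL7 parser ever sees the message. A sketch, assuming pipe-delimited fields and that you want to treat a bare "" the same as an empty field (the filenames and that policy choice are mine, not anything mandated by the standard):

        # Rewrite any field that is exactly "" into a genuinely empty field
        awk -F'|' -v OFS='|' '{ for (i = 1; i <= NF; i++) if ($i == "\"\"") $i = ""; print }' incoming.hl7 > cleaned.hl7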

    Read the article

  • Cannot locate packages for Citrix install

    - by Noel Evans
    I'm trying to get a Citrix Receiver installed onto Ubuntu 14.04 (64-bit), following Ubuntu's docs. The first line of the instructions says to get these required packages:

        sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386

    But if I paste in that line, I get this error:

        Reading state information... Done
        E: Unable to locate package libmotif4
        E: Unable to locate package libxp6
        E: Unable to locate package libxpm4
        E: Unable to locate package libasound2

    My repository settings are below. Is there anything I'm missing in there? Otherwise, what do I need to do to install these?

        $ cat /etc/apt/sources.list
        deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ precise main restricted
        deb cdrom:[Ubuntu 14.04 LTS _Trusty Tahr_ - Release amd64 (20140417)]/ trusty main restricted
        deb-src http://archive.ubuntu.com/ubuntu trusty main restricted #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ trusty main restricted universe multiverse #Added by software-properties
        deb http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ trusty-updates main restricted universe multiverse #Added by software-properties
        deb http://archive.ubuntu.com/ubuntu/ trusty-proposed main universe restricted multiverse
        deb http://archive.ubuntu.com/ubuntu/ trusty-backports main universe restricted multiverse
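    One thing that may be worth checking first, offered as a guess rather than a confirmed fix: all four packages apt cannot locate are requested as i386 builds, and apt only sees those once the i386 foreign architecture is enabled and the package lists are refreshed. On most 64-bit 14.04 installs i386 is already enabled, in which case the first command is a harmless no-op and the apt-get update alone may be what is missing:

        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386

    libmotif4 lives in multiverse, which the sources.list above does include, so a stale package index is the other usual suspect.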

    Read the article

  • unable to load nvidia(bumblebee) in ubuntu 14.04 (only nouveau loads)

    - by Ubuntuser
    Bumblebee stopped working on my system after upgrading to the stable version of Ubuntu 14.04. During installation I get this error:

        rmmod: ERROR: Module nouveau is in use
        Setting up bumblebee (3.2.1-90~trustyppa1) ...
        Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nouveau
        bumblebeed start/running, process 11133
        Processing triggers for initramfs-tools (0.103ubuntu4.1) ...
        update-initramfs: Generating /boot/initrd.img-3.14.1-031401-generic
        Setting up bumblebee-nvidia (3.2.1-90~trustyppa1) ...
        Selecting 01:00:0 as discrete nvidia card. If this is incorrect, edit the BusID line in /etc/bumblebee/xorg.conf.nvidia
        rmmod: ERROR: Module nouveau is in use
        bumblebeed start/running, process 18284

    It says nouveau is in use. I checked the loaded modules:

        lsmod | grep nouveau
        nouveau 1097199 1
        mxm_wmi 13021 1 nouveau
        ttm 85115 1 nouveau
        i2c_algo_bit 13413 2 i915,nouveau
        drm_kms_helper 52758 2 i915,nouveau
        drm 302817 7 ttm,i915,drm_kms_helper,nouveau
        wmi 19177 3 dell_wmi,mxm_wmi,nouveau
        video 19476 2 i915,nouveau

    However, I have nouveau in my blacklist:

        cat /etc/modprobe.d/blacklist.conf | grep nouveau
        blacklist nouveau
        blacklist lbm-nouveau
        alias nouveau off
        alias lbm-nouveau off

    My grub is also set to nomodeset:

        cat /etc/default/grub | grep nomodeset
        GRUB_CMDLINE_LINUX_DEFAULT="nomodeset quiet splash"

    My graphics card is nvidia optimus:

        lspci | grep -i vga
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18)
        01:00.0 VGA compatible controller: NVIDIA Corporation GT218M [GeForce 310M] (rev ff)

    I've raised a bug on Launchpad: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1327598 Note: nvidia-prime is working for me (partially), with frequent mouse locks. Interestingly, Bumblebee works perfectly fine on my Fedora 20 partition on this same laptop.
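    One detail that often bites here, an educated guess rather than a confirmed diagnosis: nouveau is loaded from the initramfs, so entries added to /etc/modprobe.d/blacklist.conf have no effect until the initramfs is rebuilt for the kernel you actually boot. The install log above only shows an image being regenerated for the 3.14 mainline kernel, so something like this may be worth trying:

        # Rebuild the initramfs for every installed kernel so the blacklist is baked in
        sudo update-initramfs -u -k all
        sudo reboot
        # After the reboot this should print nothing
        lsmod | grep nouveau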

    Read the article

  • unable to install anything that depends upon spamassassin. Can't even install spamassassin

    - by Harbhag
    I am trying to install MailScanner using apt-get install mailscanner, and I got the following error:

        Setting up spamassassin (3.3.1-1) ...
        Starting SpamAssassin Mail Filter Daemon: child process [21344] exited or timed out without signaling production of a PID file: exit 255 at /usr/sbin/spamd line 2588.
        invoke-rc.d: initscript spamassassin, action "start" failed.
        dpkg: error processing spamassassin (--configure):
         subprocess installed post-installation script returned error exit status 255
        dpkg: dependency problems prevent configuration of mailscanner:
         mailscanner depends on spamassassin (>= 3.1); however:
          Package spamassassin is not configured yet.
        dpkg: error processing mailscanner (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         spamassassin
         mailscanner
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I tried to install spamassassin on its own, I got exactly the same error (only the child process PID differs). I am using Ubuntu Server 10.04.
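    Since the packaging failure is really "spamd refused to start", one way in (a suggestion, not a known fix for this exact exit code) is to ask SpamAssassin to lint its own configuration and to confirm the daemon is enabled, then let dpkg finish the interrupted configuration:

        # Check whether the init script is allowed to start spamd at all
        grep ENABLED /etc/default/spamassassin
        # Lint the configuration with debug output; the last lines usually
        # name the broken rule, missing module, or bad path
        sudo spamassassin -D --lint 2>&1 | tail -n 20
        # Once spamd starts cleanly, finish configuring the pending packages
        sudo dpkg --configure -a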

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30%?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.
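To make the finished formula concrete: assuming the rate table occupies cells A5:B9 (the article never states the exact range, so that part is an assumption), the formula Excel builds in B2 would be =VLOOKUP(B1,A5:B9,2,TRUE), which reads as "find the largest value in A5:A9 that does not exceed the sales total in B1, and return the matching rate from the second column".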

    Read the article

  • JavaCV IplImage to LWJGL Texture

    - by rendrag
    As a side project I've been attempting to make a dynamic display (for example a screen within a game) that shows images from my webcam. I've been messing around with JavaCV and LWJGL for the past few months and have a basic understanding of how they both work. I found this after scouring google, but I get an error that the ByteBuffer isn't big enough. IplImage img = cam.getFrame(); ByteBuffer buffer = img.asByteBuffer(); int textureID = glGenTextures(); //Generate texture ID glBindTexture(GL_TEXTURE_2D, textureID); //Bind texture ID //I don't know how much of the following is necessary //Setup wrap mode glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE); //Setup texture scaling filtering glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //Send texture data to OpenGL - this is the line that actually does stuff and that OpenGL has a problem with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL12.GL_BGR, GL_UNSIGNED_BYTE, buffer); That last line throws this- Exception in thread "Thread-0" java.lang.IllegalArgumentException: Number of remaining buffer elements is 144, must be at least 921600. Because at most 921600 elements can be returned, a buffer with at least 921600 elements is required, regardless of actual returned element count at org.lwjgl.BufferChecks.throwBufferSizeException(BufferChecks.java:162) at org.lwjgl.BufferChecks.checkBufferSize(BufferChecks.java:189) at org.lwjgl.BufferChecks.checkBuffer(BufferChecks.java:230) at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2845) at tests.TextureTest.getTexture(TextureTest.java:78) at tests.TextureTest.update(TextureTest.java:43) at lib.game.AbstractGame$1.run(AbstractGame.java:52) at java.lang.Thread.run(Thread.java:679)

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. So how did this happen, and what does it mean for Microsoft's efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a "Microsoft Is Dead" story, then don't read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading. What Led Microsoft To The $900 Million Write-Down Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should've been ready to post the specifications of both devices that night. Microsoft should've had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea of what the interest was in each device. They could also have used this data to make a better estimate for the number of units to have available for the launch and beyond. They also lost out on taking advantage of the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard to anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached. Push for as many pre-orders as possible and build excitement for the launch. Actual Launch (Surface RT): By this time all excitement was gone from the initial announcement, except for the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device and now they are lowering it to what it should have been to start with. The $349 price is within the range I suggested it should be at before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted (this applies more to the Surface Pro). There should also have been a place where users could order any additional add-ins that they wanted to buy (covers, extra power supplies, etc.). Marketing was better, and the dancing "Click In" commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and built-in kickstand. They could have also compared it to some of the other mid-range tablets if they had not overpriced it to begin with. Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications the best of breed, and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking features that should have been there from day one. This did not help to enhance a new user's experience any.
    ** I will admit that the applications that were data-driven were first-class citizens, which makes it even more perplexing why MS could knock it out of the park with Weather, Travel, Finance, Bing, etc., and fail so miserably on the core applications users would use the most on a tablet. Desktop on Tablet: The desktop just is so out of place on the tablet. I understand it was needed for Office, but I think it would have been better to not have the desktop in Windows RT, but instead open up the Office applications in full-screen mode, in a desktop shell (same goes for IE11). That way the user wouldn't realize they are leaving Metro and going to the desktop. The other option would have been to just not include Office on Windows RT devices. Instead they could have made awesome Windows Store Apps for Word, Excel, OneNote and PowerPoint. In addition, they could have made the stock Mail, People, and Calendar applications contain all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under "Change PC Settings" made Windows RT seem unfinished and rushed to market. What Can Be Done To Make Windows RT Based Tablets Better (At least in my opinion) Either eliminate the desktop altogether from Windows RT or at least make the user experience better by hiding the fact the user is running Office/IE in the desktop. Personally I'd like them to totally get rid of it and just make awesome Windows Store Application versions of Word, Excel, PowerPoint & OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be Windows Store App versions of Office, but I still think it is a good idea. Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VM on their microSD card or create symbolic links to get this to work properly. Most consumers would not be able to do this. Then users get frustrated when they run out of room on their main storage because nothing is automatically saved to their microSD card when saved to libraries. This is a major bug that needs to be fixed; otherwise Microsoft's selling point of having a microSD slot is worthless. Allow users to uninstall and re-install any of the Office products that come with the Surface. That way people can free up storage space by uninstalling the Office applications they do not need. Everyone's needs are different, so make the options flexible. Don't take up storage space for applications the user will not use. Make the core applications the "cream of the crop" of Windows Store applications. They should set the bar for all other Store applications. Improve performance as much as possible; if it seems sluggish on a tablet, consumers will not buy it. They need to price the next line of Surface products very aggressively, to undercut not only the iPad but also low-end Android tablets (Nook, Kindle Fire, Nexus, etc.). Give developers incentives to write quality applications for the devices. Don't reward developers for cranking out cookie-cutter, low-quality applications. I'd even suggest Microsoft consider implementing some new store certification guidelines to stop these types of applications from being published. Allow users to easily move the recovery disk partition between their microSD card and main storage.
    My Predictions for the Surface RT and Windows RT I honestly think that even with all the missteps MS has made since the announcement of the Surface product line, they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fanboy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple's butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew, and that is why whenever anyone asks me about my Surface, I love showing it off and recommending it. The 6 keys to gaining market share in the tablet market are: aggressive pricing by both Microsoft and their OEMs; good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough); marketing, marketing, marketing from both Microsoft and their OEMs (they need more ads showing why Windows based tablets are better than iPads and Android tablets); getting Windows tablets into retail stores all over, and giving salespeople incentive to sell them (consumers like to try electronics out before they buy them, and most will listen to what the salesperson suggests; Microsoft needs salespeople in retail stores directing people to buy Windows based tablets over iPads and Android tablets; I think the Microsoft Stores within Best Buy are a good start, but they also need prominent displays in Walmart, Target, etc.); releasing a smaller form factor Surface (hopefully the 8"-10" next-generation Surface is not a rumor); and making "Surface" the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way that iPad is for Apple. Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has put on his blog some ugly code to refactor. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind the current code. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know about every brittle detail, or whether I am allowed to reorder things here and there to simplify things. So I decided to make it much simpler by identifying the different responsibilities of the methods and encapsulating them in different classes. The code we need to refactor seems to deal with a handler after a message has been sent to a message queue. The handler completes the current transaction, if there is one, and handles any errors happening there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can enter the handler already in a faulty state, where we try to deliver the complete event in any case, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess, and could have been written by me if I was under pressure. Here is the code we want to refactor:

        private void HandleMessageCompletion(
            Message message,
            TransactionScope tx,
            OpenedQueue messageQueue,
            Exception exception,
            Action<CurrentMessageInformation, Exception> messageCompleted,
            Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            var txDisposed = false;
            if (exception == null)
            {
                try
                {
                    if (tx != null)
                    {
                        if (beforeTransactionCommit != null)
                            beforeTransactionCommit(currentMessageInformation);
                        tx.Complete();
                        tx.Dispose();
                        txDisposed = true;
                    }
                    try
                    {
                        if (messageCompleted != null)
                            messageCompleted(currentMessageInformation, exception);
                    }
                    catch (Exception e)
                    {
                        Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                    }
                    return;
                }
                catch (Exception e)
                {
                    Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                    exception = e;
                }
            }
            try
            {
                if (txDisposed == false && tx != null)
                {
                    Trace.TraceWarning("Disposing transaction in error mode");
                    tx.Dispose();
                }
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
            }
            if (message == null)
                return;
            try
            {
                if (messageCompleted != null)
                    messageCompleted(currentMessageInformation, exception);
            }
            catch (Exception e)
            {
                Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
            }
            try
            {
                var copy = MessageProcessingFailure;
                if (copy != null)
                    copy(currentMessageInformation, exception);
            }
            catch (Exception moduleException)
            {
                Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
            }
            if (messageQueue.IsTransactional == false) // put the item back in the queue
            {
                messageQueue.Send(message);
            }
        }

    You can see quite some processing and handling going on there. Yes, this looks like real-world code someone put together to make things work, and he does not trust his callbacks. I guess these are event handlers which are optional, and the delegates were extracted from an event to call them back later when necessary. Let's see what the author of this code intended:

        private void HandleMessageCompletion(
            TransactionHandler transactionHandler,
            MessageCompletionHandler handler,
            CurrentMessageInformation messageInfo,
            ErrorCollector errors)
        {
            // commit current pending transaction
            transactionHandler.CallHandlerAndCommit(messageInfo, errors);

            // We have an error for a null message, do not send completion event
            if (messageInfo.CurrentMessage == null)
                return;

            // Send completion event in any case regardless of errors
            handler.OnMessageCompleted(messageInfo, errors);

            // put message back if queue is not transactional
            transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
        }

    I did not bother to write the intention here again, since the code should be pretty self-explanatory by now. I have used comments to explain the still nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying the techniques of SRP (Single Responsibility Principle) and some functional style, we can abstract the necessary complexity away into useful abstractions which make it much easier to reason about. Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of our current message in an ErrorCollector object, which stores all exceptions in a list along with a description of what the error was about in the exception itself. We can log it later or not, depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.

        class ErrorCollector
        {
            List<Exception> _Errors = new List<Exception>();

            public void Add(Exception ex, string description)
            {
                ex.Data["Description"] = description;
                _Errors.Add(ex);
            }

            public Exception Last
            {
                get { return _Errors.LastOrDefault(); }
            }

            public bool HasError
            {
                get { return _Errors.Count > 0; }
            }
        }

    Since the error state is global, we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler), or pass it to the method calls when necessary. I chose the latter because a second argument does not hurt, and it makes it easier to reason about the overall state while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread safe as well. This does not mean that the stored member variables are stateless or thread safe, but at least our helper classes are. Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it will call the before-commit handler, commit the transaction, and dispose the transaction if the commit threw. In fact it has a second responsibility: to resend the message if the transaction failed. I saw a good fit there, since it deals with transaction failures.

        class TransactionHandler
        {
            TransactionScope _Tx;
            Action<CurrentMessageInformation> _BeforeCommit;
            OpenedQueue _MessageQueue;

            public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)
            {
                _Tx = tx;
                _BeforeCommit = beforeCommit;
                _MessageQueue = messageQueue;
            }

            public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                if (_Tx != null && !errors.HasError)
                {
                    try
                    {
                        if (_BeforeCommit != null)
                        {
                            _BeforeCommit(currentMessageInfo);
                        }

                        _Tx.Complete();
                        _Tx.Dispose();
                    }
                    catch (Exception ex)
                    {
                        errors.Add(ex, "Failed to complete transaction, moving to error mode");
                        Trace.TraceWarning("Disposing transaction in error mode");
                        try
                        {
                            _Tx.Dispose();
                        }
                        catch (Exception ex2)
                        {
                            errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                        }
                    }
                }
            }

            public void ResendMessageOnError(Message message, ErrorCollector errors)
            {
                if (errors.HasError && !_MessageQueue.IsTransactional)
                {
                    _MessageQueue.Send(message);
                }
            }
        }

    If we need to change the handling in the future, we will have a much easier time reasoning about our application flow than before. After we have completed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback, and the failure callback when some error has occurred.

        class MessageCompletionHandler
        {
            Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
            Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

            public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                            Action<CurrentMessageInformation, Exception> messageProcessingFailure)
            {
                _MessageCompletedHandler = messageCompletedHandler;
                _MessageProcessingFailure = messageProcessingFailure;
            }

            public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageCompletedHandler != null)
                    {
                        _MessageCompletedHandler(currentMessageInfo, errors.Last);
                    }
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
                }

                if (errors.HasError)
                {
                    SignalFailedMessage(currentMessageInfo, errors);
                }
            }

            void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageProcessingFailure != null)
                        _MessageProcessingFailure(currentMessageInfo, errors.Last);
                }
                catch (Exception moduleException)
                {
                    errors.Add(moduleException, "Module failed to process message failure");
                }
            }
        }

    If for some reason I screwed up the logic and we need to call the completion handler from our TransactionHandler, we can simply add the MessageCompletionHandler as a third argument to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, in which to capture that state. During this refactoring I simply put things together that belong together and came up with useful abstractions.
    If you look at the original argument list of the HandleMessageCompletion method, you can see how many things I have put together:

        Message message
            -> CurrentMessageInformation messageInfo (encapsulates Message message)
        TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit, OpenedQueue messageQueue
            -> TransactionHandler transactionHandler (encapsulates all three)
        Exception exception
            -> ErrorCollector errors
        Action<CurrentMessageInformation, Exception> messageCompleted
            -> MessageCompletionHandler handler (also encapsulates Action<CurrentMessageInformation, Exception> messageProcessingFailure)

    The reason is simple: put the things that have relationships together and you will nearly automatically find useful abstractions. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.

    Read the article

  • Thoughts on my new template language?

    - by Ralph
    Let's start with an example:

        using "html5"
        using "extratags"
        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br
                    @company
                }
            }
        }

    The "using" keyword indicates which tags to use. "html5" might include all the standard HTML5 tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to be. The "extratags" library, for example, might add an extra tag called "jsinclude", which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single quotes to indicate "no variable substitution", like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in (), like p(class="classname"). Some questions: Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else? Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? Do you like the attribute syntax? (round brackets) I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers, and devices on a network with low bandwidth, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script - you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML, and into something like a JSON API. I can not seem to gel these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and I am missing something here with my mental model?

    Read the article

  • Time Travel 101

    - by Jim Duffy
    I’m thinking maybe I should have used Time Crunching 101 as the title instead… or maybe ‘Duh Duffy, where have you been? Everyone knows that!” Ok, so maybe you won’t actually learn how to travel through time from this post but you will learn how to cram more learning into one day. We all know you can’t make it to every conference, every presentation, or every training session. The good news is that many of those events make their content available to either watch online or to download for off-line viewing. The problem is who has time to sit and watch all those presentations in real time? Not me. One trick I use is to view the content at an increased play rate. Why listen to a boring speaker like me drone on for the entire length of the session when you can listen to them drone on in almost half the time. :-) I view nearly all off-line content with Windows Media Player though I’m sure you can implement this idea with any media playback software. The idea is changing the playback speed you view the content at. With Windows Media Player you can change the play speed from the menu system. Once you have the Play Speed Setting panel open you can specify the playback speed. Depending on the content and the presenter I can typically listen between 1.6 and 2.0 times normal speed. My Florida edumacation taught me that playing the video back at twice the speed means I’ll listen to it twice as fast and that means I can view it in almost 1/2 the time.  Too bad it won’t make me twice as smart. :-) I hope this helps you speed your way through more training content. Have a day. :-|

    Read the article

  • What, if anything, to do about bow-shaped burndowns?

    - by Karl Bielefeldt
    I've started to notice a recurring pattern in our team's burndown charts, which I call a "bowstring" pattern. The ideal line is the "string", and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of a story we are doing a lot of debugging or exploratory work for which it is difficult to estimate the remaining effort. Sometimes the line even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test, which are more predictable, hence the graph curving down. Note I'm not talking about anything on the scale of big design up front (BDUF), just the natural short-term constraint that you have to find the bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? If so, what is the root cause, and what are some techniques to deal with it?

    Read the article

  • PHP Aspect Oriented Design

    - by Devin Dixon
    This is a continuation of this Code Review question. What was taken away from that post, and from other aspect-oriented designs, is that they are hard to debug. To counter that, I implemented the ability to turn on tracing of the design patterns. Turning trace on works like this:

        //This can be added anywhere in the code
        Run::setAdapterTrace(true);
        Run::setFilterTrace(true);
        Run::setObserverTrace(true);

        //Execute the function
        echo Run::goForARun(8);

    In the actual log with the trace turned on, the output looks like so:

        adapter 2012-02-12 21:46:19 {"type":"closure","object":"static","call_class":"\/public_html\/examples\/design\/ClosureDesigns.php","class":"Run","method":"goForARun","call_method":"goForARun","trace":"Run::goForARun","start_line":68,"end_line":70}
        filter 2012-02-12 22:05:15 {"type":"closure","event":"return","object":"static","class":"run_filter","method":"\/home\/prodigyview\/public_html\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":51,"end_line":58}
        observer 2012-02-12 22:05:15 {"type":"closure","object":"static","class":"run_observer","method":"\/home\/prodigyview\/public_html\/public\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":61,"end_line":63}

    When the information is broken down, the data translates to:

        - Called by an adapter, filter, or observer
        - The function called was a closure
        - The location of the closure
        - The class::method the adapter was implemented on
        - The trace of where the method was called from
        - The start line and end line

    The code has been proven to work in production environments and features various examples of how to implement it, so the proof of concept is there. It is not DI, and it accomplishes things that DI cannot. I wouldn't call the code boilerplate, but I would call it bloated. In summary, the weaknesses are bloated code and a learning curve, in exchange for aspect-oriented functionality. Beyond the normal fear of something new and different, what are the other weaknesses in this implementation of aspect-oriented design, if any? PS: More examples of AOP here: https://github.com/ProdigyView/ProdigyView/tree/master/examples/design

    Read the article

  • disable all hotkeys with dconf-editor

    - by Gijs
    I'm building a deb package that will create a kiosk user account. When you log in to this account, the browser automatically starts and navigates to a pre-defined website. Also, all the hotkeys should be disabled except for one that you define. Now I'm at the point where the browser starts and my disable script is executed on logon, but, for example, I can still close the browser with Alt + F4. And when I check dconf-editor for the hotkeys, the binding is deleted for every single one, but mine for close isn't added, as you can see in the screenshot. I disabled all these keys with this code, one line for every hotkey:

        gsettings set org.gnome.desktop.wm.keybindings begin-resize []

    So this seems to work, but at the bottom of my disable script this line should be executed:

        gsettings set org.gnome.desktop.wm.keybindings close ['<Alt>b']

    Does anyone know why I'm able to unbind all these hotkeys in dconf-editor but they are still able to do their job? And if it is possible to unbind them, why can't I bind mine? I searched the web for a solution but couldn't find one fitting my needs; I hope some of you know the answer to this. Regards, Gijs
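
    Update: one thing I have not ruled out myself (an assumption, since it depends on the shell that runs the script): if these commands run under bash, the inner quotes in ['<Alt>b'] are stripped during word expansion, so gsettings would actually receive the invalid value [<Alt>b]. Quoting the whole GVariant array avoids that:

        # Quote the entire value so the shell passes it through untouched
        gsettings set org.gnome.desktop.wm.keybindings close "['<Alt>b']"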

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived.

        “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk

    It is a Dell Precision M4500 with an Intel Core i7 2.8 GHz running 64-bit Windows 7, with a 15.6” widescreen, 8 GB RAM, and a 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward.  I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let’s just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week.  But that’s a different story for a different time.

    But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them:

        - Virtual Clone Drive: Easily mount an ISO image as a DVD drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools.
        - SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but is intended for development use.
        - SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server.  There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon, in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
        - Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it.  I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev.
        - Vault Professional Client: Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional.
        - Red-Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate’s first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else’s code when their indenting was nonexistent or, worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer’s copy of the database in sync with one another.
        - Fiddler: Helps you watch the whole traffic stream on web visits.  I haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  It has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
        - Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price.
        - SciTE: A Scintilla-based text editor and a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible.
        - Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.
        - WinZip (with the Command-Line add-in): The classic compression program, with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned build packages are zip files.
        - XML Notepad: I haven’t used this a lot myself, but one of my team really likes it for examining large XML files.
        - LINQPad: Again, I haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
        - SQL Sentry Plan Explorer: SQL Server showplan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for compressing the graphical plan into a more readable layout.
        - Araxis Merge: A great diff and merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge.  For a while, we also produced diff reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
        - Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files.
        - Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.
        - DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.
        - Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.
        - FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
        - Synergy+: In my office, I run four monitors across two computers, all with one mouse and keyboard.  Synergy is the magic software that makes this work.
        - TweetDeck: I’m not the most active tweeter in the world, but when I want to check in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

    Whew!  That’s a lot.  No wonder it took me a couple of days to get everything set up the way I wanted it.  Oh, that, and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say: CONGRATULATIONS, you made it!  I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • Misunderstanding Scope in JavaScript?

    - by Jeff
    I've seen a few other developers talk about "binding scope" in JavaScript, but it has always seemed to me like this is an inaccurate phrase. Function.prototype.call and Function.prototype.apply don't pass scope around between two methods; they change the receiver (the this value) of the function - two very different things. For example:

        function outer() {
            var item = { foo: 'foo' };
            var bar = 'bar';
            inner.apply(item, null);
        }

        function inner() {
            console.log(this.foo); //foo
            console.log(bar);      //ReferenceError: bar is not defined
        }

    If the scope of outer were really passed into inner, I would expect inner to be able to access bar, but it can't: bar was in scope in outer, and it is out of scope in inner. Hence, the scope wasn't passed. Even the Mozilla docs don't mention anything about passing scope: "Calls a function with a given this value and arguments provided as an array." Am I misunderstanding scope, or specifically scope as it applies to JavaScript? Or is it these other developers who are misunderstanding it?
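
    To make the distinction I'm drawing concrete, here is a minimal sketch (the names are illustrative): this is chosen by the caller at call time via call/apply, while variable lookup is fixed lexically at the point where the function is defined.

        function outer() {
            var bar = 'bar';
            // Defined inside outer, so it closes over bar lexically.
            function closesOverBar() {
                console.log(bar);      // 'bar': resolved through the scope chain of the definition site
                console.log(this.foo); // 'foo': whatever receiver the call supplies
            }
            closesOverBar.call({ foo: 'foo' });
        }
        outer(); // logs 'bar' then 'foo'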

    Read the article

  • Can I use Ubuntu One to sync data files between two remote computers

    - by Sleepy John
    I've got two computers, both running Ubuntu with files in their home folders synced to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download changes that have been uploaded to Ubuntu One from one computer into the equivalent data files on the other. Clarifying a bit further: I've installed Red Notebook on both computers, so each has its own ~/.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are synced so that any changes to those .txt files are uploaded to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files on the other computer after they've been updated and uploaded from the first computer? I did laboriously manage to download all those text files which had been uploaded from the first computer, one by one from Ubuntu One to the second computer, but what I want to do is automate this process, and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were online at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files on the other computer as soon as that second computer appears online and detects that Ubuntu One has matching but more recently synced files than the ones it's holding.

    Read the article

  • What happened to Alan Cooper's Unified File Model?

    - by PAUL Mansour
    For a long time Alan Cooper (in the 3 versions of his book "About Face") has been promoting a "unified file model" to, among other things, dispense with what he calls the most idiotic message box ever invented - the one that pops up when you hit the close button on an app or form, saying "Do you want to discard your changes?" I like the idea and his arguments, but I also have the knee-jerk reaction against it that most seasoned programmers and users have. While Cooper's book seems quite popular and respected, there is remarkably little discussion of this particular issue on the web that I can find. Petter Hesselberg, the author of "Programming Industrial Strength Windows", mentions it, but that seems about it. I have an opportunity to implement this in the (desktop) project I am working on, but I face resistance from customers and co-workers, who are of course familiar with the MS Word and Excel way of doing things. I'm in a position to override their objections, but am not sure if I should. My questions are: Are there any good discussions of this that I have failed to find? Is anyone doing this in their apps? Is it a good idea that is unfortunately not practical to implement until, say, Microsoft does it?

    Read the article

  • External microphone not working

    - by haireefairee
    gnome-volume-control does not recognise external hardware. My headphones work nonetheless, but an external microphone does not. External microphones used to work, but at times were temperamental - I would have to log in or log out with or without the microphone plugged in. I am running Ubuntu 10.04 LTS (Lucid Lynx) on an MSI Wind U100 notebook with one Intel sound card, and I am trying to use a jack microphone which has worked previously. USB microphones have also been problematic. I have done the basics:

        - Installed upgrades.
        - Checked nothing is muted.
        - Looked for the device in gnome-volume-control.
        - Tried using a different microphone that works on a friend's computer.
        - Tested that my microphone works when using a different computer.
        - Checked that my sound card can be seen (cat /proc/asound/cards).

    I have done more complicated things: I have tried playing around with settings in alsamixer. Nothing is muted. I can adjust "mic" and "internal mic" regardless of whether an external microphone is plugged in. I have the choice of input source from "mic", "front mic", "line" and "CD"; I've played around changing this and it hasn't helped. I only have one CAPTURE option. In gnome-sound-recorder I have the choice of line, microphone 1 and microphone 2; I have played around changing this option, but none of these pick up sound from the external microphone. Microphone 2 is the microphone on my laptop, which is bad quality. In gnome-sound-recorder I also have the choice of different profiles, and changing these has not helped either. I have looked at gstreamer-properties, but none of that seemed helpful. I don't know if there is a way to check whether these external devices are being picked up. I would like to make an external microphone work. Please help!
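
    Update: in case it helps with diagnosis, here are some extra checks I can run (standard ALSA tools from the alsa-utils package; the card/device numbers are just placeholders):

        # List every capture device ALSA can see
        arecord -l

        # Record a 5-second test clip from the default device; -vv shows a VU meter
        arecord -vv -f cd -d 5 test.wav

        # Target a specific card/device explicitly (here card 0, device 0)
        arecord -D hw:0,0 -f cd -d 5 test.wav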

    Read the article
