Search Results

Search found 801 results on 33 pages for 'louis van tonder'.

Page 5/33

  • Looking for mass cropping software

    - by Bart van Heukelom
    I'm looking for a tool that runs on Ubuntu and lets me:

    1. Open an image in a folder that contains thousands of them
    2. Crop and rotate it
    3. Save it as a copy with one click, automatically named (not manually), preferably with something in the name that I can later use to filter the cropped copies in Nautilus (unless it saves to another directory; that would be even better)
    4. Move to the next image and repeat

    Does it exist?
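
    For the non-interactive part of this, ImageMagick can already do the crop/rotate/auto-name loop if the crop box happened to be fixed across images (a sketch; the geometry, angle, and "_cropped" suffix are invented for illustration):

        # crop/rotate every JPEG and save an auto-named copy next to the original
        for f in *.jpg; do
            convert "$f" -rotate 90 -crop 800x600+100+50 +repage "${f%.*}_cropped.jpg"
        done

    The fixed "_cropped" suffix is exactly the kind of marker that can be filtered on in Nautilus later; interactive per-image cropping, though, really does call for a GUI tool.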

    Read the article

  • Are you ready for SharePoint 2010?

    - by Michael Van Cleave
    With SharePoint's next release on the horizon (May 12th), many of my clients and colleagues are starting to ramp up for the upcoming tidal wave of functionality. Microsoft has been doing a terrific job of getting as much information out into the public limelight as possible over the last few months, and I think that will definitely pay off with regard to acceptance of the new version of SharePoint. However, there are still some aspects of the new platform that are a little murky, such as:

    - "Should we upgrade?"
    - "Will my current installation upgrade without issues?"
    - "What benefits will I see by upgrading?"
    - "What are the best practices for upgrading, or best practices in general relating to 2010?"
    - "How should we plan to deploy SharePoint 2010 in our organization?"

    There is a ton of information out there, but how do you go about getting some of these questions answered? Well, I am glad you asked. :) ShareSquared will be delivering a FREE SharePoint 2010 Readiness Webinar covering preparation, strategies, and best practices for the upcoming version of SharePoint. The webinar will be presented by two of ShareSquared's outstanding SharePoint MVPs, Gary Lapointe and Paul Stork. As all those TV commercials say... "Space is limited, so sign up now!" Just kidding; well, kind of, but not really. I am sure that the signup will be huge and space really is limited, so the sooner you sign up the better. I would hate for any of you to miss out. If you have any questions, please don't hesitate to shoot me an e-mail through my blog or contact ShareSquared directly. See you at the webinar! Michael

    Read the article

  • Better drivers for SiS 650/740 integrated video?

    - by Bart van Heukelom
    I installed Xubuntu 10.10 on an old box today and the graphical performance is horrid. According to lspci, the video card is this:

        01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter (prog-if 00 [VGA controller])
                Subsystem: ASUSTeK Computer Inc. Device 8081
                Flags: 66MHz, medium devsel, IRQ 11
                BIST result: 00
                Memory at f0000000 (32-bit, prefetchable) [size=128M]
                Memory at e7800000 (32-bit, non-prefetchable) [size=128K]
                I/O ports at d800 [size=128]
                Expansion ROM at <unassigned> [disabled]
                Capabilities: <access denied>
                Kernel modules: sisfb

    Is there a way to make it faster? Alternative drivers? The Additional Drivers tool shows nothing. I'm specifically interested in improving Java's Java2D rendering speed, because I'll be running a "stat screen" written in that language on it.

    Read the article

  • PublishingWeb.ExcludeFromNavigation Missing

    - by Michael Van Cleave
    So recently I have had to make the transition from the SharePoint 2007 codebase to the SharePoint 2010 codebase. Needless to say, there hasn't been much difficulty in the changeover. However, in a set of code that I was playing around with and transitioning, I did find one change that might cause some pain to others out there who have been programming against the PublishingWeb object in the Microsoft.SharePoint.Publishing namespace.

    The snippet that works just fine in 2007 looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.ExcludeFromNavigation(true, itemID);
            publishingWeb.Update();
        }

    The 2010 update to the code looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.Navigation.ExcludeFromNavigation(true, itemID); //--Had to reference the Navigation object.
            publishingWeb.Update();
        }

    The purpose of the code is to keep a page from showing up in the global or current navigation when it is added to the Pages library. You can see that the update to the 2010 codebase actually makes more "object" sense: it specifies that you are affecting the Navigation of the PublishingWeb, instead of the PublishingWeb itself. I know that this isn't a difficult problem to fix, but I thought it would be something quick to help out the general public. Michael

    Read the article

  • How to set up one-man research into the difference between BDD and Waterfall?

    - by Martijn van der Maas
    Earlier, I asked a question about how to measure the quality of a project. The outcome of that question was that the quality of a project can be divided into two parts:

    - Internal quality (code quality, measurable by code-quality metrics)
    - External quality (acceptance tests: how well the software meets the requirements)

    Based on that, I want to set up some research and validate the outcome of the project. The problem is that I will conduct this research on my own, so it's not possible for me to run the project once in BDD style and once in waterfall style. It's also not possible to compare BDD and waterfall projects on a larger scale, because BDD is young enough that there are not yet enough BDD projects to measure. So, my question is: has anybody faced this problem? How could I execute my experiment in such a way that it is of scientific value?

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regard to the direction most of the technical world is going. We have all been inundated with reasons to utilize cloud computing: price, availability, even scalability. But what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications.

    The overall point of the article was pretty simple. It boiled down to the summation that "things" hosted in the cloud (e-mail, documents, etc.) are interpreted differently under the law regarding constitutional search and seizure than, say, a document or item that is kept in physical form at a business or home. If you physically have something stored, someone would have to get a warrant to search for it or seize it; but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology.

    So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. For one example, the upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting becomes possible for service providers. Another aspect of the upcoming version is its updated ability to store content outside of the database in cheaper, commoditized storage. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not require a warrant to search or seize their information stored in the cloud.

    It will be interesting to see how this all plays out in the next few months. The law usually changes slowly in comparison to technology, so it might be a while until we see whether it is actually constitutional to treat someone's content in the cloud differently from content in their possession. Until some type of parity happens, or more concrete laws address the differences, be very careful about what you put in the cloud. Michael

    Read the article

  • Is HTML5/WebGL performance unreliable on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games and am trying them out on Android, but I found that they run very slowly on my tablet. For example, a game with 10 or so sprites runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even ones with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I've found that it generally has good gaming performance: all the native games I tried run smoothly, and an OpenGL ES 2 demo I developed with 200 shaded 3D objects ran at 50 fps. My suspicion is that many low-end and white-label devices have unacceptable HTML5/WebGL support, which would mean there is a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear whether any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices on which I can expect decent performance.
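
    To make numbers like "5 fps" comparable across devices, a tiny frame counter is enough; this is a generic sketch, not the asker's actual game code:

        // count frames per wall-clock second and log the result
        var frames = 0;
        var last = Date.now();
        function tick() {
            frames++;
            var now = Date.now();
            if (now - last >= 1000) {
                console.log(frames + " fps");
                frames = 0;
                last = now;
            }
            requestAnimationFrame(tick);
        }
        requestAnimationFrame(tick);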

    Read the article

  • What expectation should I have of South African web development rates / duration? [closed]

    - by Warren van Rooyen
    I am a developer, but I only intend to do the front-end work for getting Reddit-like upvote/downvote functionality going on an upcoming site I'm building. I have never had to contract a developer for back-end work to implement code for me, so I am quite in the dark about how much I should expect to pay and how long it could take to get the site going. I could be taken for a ride, as the developer could inflate the time it would take at a seemingly regular rate (hourly/daily), or could otherwise inflate the rate itself. Please give me some guidance on this. I know you need some sense of the nature of the site, so here it is: I have a Reddit-type template with CSS and PHP included, and I downloaded the Pligg code that's intended to do the job of the Reddit upvote/downvote functionality. How long would a developer roughly need to unite the theme and front end with the back-end functionality? I do understand it's not a lot of info, but I'm sure you're experienced enough to have an instinct for the size of the project. Also, should I work on an hourly rate, a day rate, or a per-project payment agreement?

    Read the article

  • Why can we recognize game engines?

    - by Bart van Heukelom
    About many games you can say "oh, that's the Unreal engine for sure" or "this was made by upgrading GTA 4", etc. We can often recognize the engine used for a game just by looking at its graphics (disregarding menus and such). I'm wondering: why is this? All game engines use the same 3D rendering technology that we all use, and different games usually have a distinct art style, so what's left to recognize?

    Read the article

  • How to move Ubuntu to an SSD

    - by Bart van Heukelom
    My current situation is:

    - One hard disk
    - Dual boot: Ubuntu 11.04 and Windows 7
    - Partitions:
      - 100 MB Windows system thingy
      - 144 GB main Windows
      - 160 GB Ubuntu
      - 4 GB swap
      - 12 GB System Restore stuff

    Now I want to install an 80 GB SSD and move Ubuntu to it. AFAIK I need to:

    1. Shrink the 160 GB Ubuntu partition to 80 GB
    2. Copy it over to the SSD
    3. Change fstab to mount the SSD as /

    How do I do the second? And what do I need to do about Grub?
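
    A rough sketch of steps 2 and 3, plus GRUB (device names are examples; assume the SSD is /dev/sdb with a fresh ext4 partition /dev/sdb1, and run this from a live CD rather than the running system):

        sudo mkdir -p /mnt/old /mnt/ssd
        sudo mount /dev/sda3 /mnt/old          # the (shrunk) Ubuntu partition; sda3 is an example
        sudo mount /dev/sdb1 /mnt/ssd
        sudo rsync -aAXH /mnt/old/ /mnt/ssd/   # copy everything, preserving permissions/ACLs

        sudo blkid /dev/sdb1                   # note the SSD partition's UUID
        sudo nano /mnt/ssd/etc/fstab           # point the / entry at that UUID

        # put GRUB on the SSD so it can boot on its own:
        for d in dev proc sys; do sudo mount --bind /$d /mnt/ssd/$d; done
        sudo chroot /mnt/ssd grub-install /dev/sdb
        sudo chroot /mnt/ssd update-grub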

    Read the article

  • Getting Wacom Bamboo Pen + Touch pressure sensitivity in GIMP

    - by Bart van Heukelom
    I've installed my Wacom Bamboo Pen + Touch (CTH-460) in Ubuntu (at least on one system, not another) and the pen works well for controlling the mouse cursor. However, I can't get pressure sensitivity to work in GIMP. I now have 4 extra devices in the Input Devices settings screen, two of which are the pen and the eraser. I've set them both from Disabled to Screen and left the default settings intact, but after saving I still don't see any pressure options in the brush tool's options. I've also tried setting the mode to Window instead, but it makes no difference regarding pressure sensitivity; there are no other modes. Pressure works out of the box in Blender (Grease Pencil), so it must be something in GIMP. What can be wrong? Why don't the options appear? How can I debug this?
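
    One way to start debugging (a sketch; the exact device name will differ on your system) is to check whether X itself reports pressure, which separates a driver problem from a GIMP configuration problem:

        xinput list                                   # the stylus should appear as its own device
        xinput list-props "Wacom Bamboo Pen stylus"   # device name here is an example
        xinput test "Wacom Bamboo Pen stylus"         # press the pen down; one of the
                                                      # valuators should change with pressure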

    Read the article

  • If you had three months to learn one relatively new technology, which one would you choose?

    - by Ivo van der Wijk
    This question was taken from CodingHorror. On my list would be (and some actually are):

    - Android development (and possibly iPhone development)
    - The Go language and its concurrency
    - NoSQL, specifically CouchDB
    - RCTK, which happens to be my own idea/project (but all ideas have been thought of already; what matters is my implementation)

    But I don't think I'm being cutting-edge or thinking outside the box here. What's on your list? Please don't restrict yourself to the list above; that's my list. I'm interested in hearing what others find to be interesting new technology.

    Read the article

  • Remove Duplicate Second Unity Launcher on Dual Screen

    - by Eugene van der Merwe
    See the attached image. On my dual-screen display I have a Unity launcher on the left-hand screen and also on the right-hand screen. Both work perfectly fine, but I don't want two Unity launchers: every time I move my mouse to the right-hand side it gets slowed down over the right launcher, hampering my productivity. I have an Nvidia card with the Nvidia driver, and I am using TwinView. Could somebody please tell me how to remove this extra, duplicated launcher?

    Read the article

  • Object-oriented EDI handling in PHP

    - by Robert van der Linde
    I'm currently starting a new subproject in which I will:

    1. Retrieve the order information from our mainframe
    2. Save the order information to our web app's database
    3. Send the order as EDI (either D01b or D93a)
    4. Receive the order response, despatch advice, and invoice messages
    5. Do all kinds of fun things with the resulting datasets

    However, I am struggling with my initial class designs. The order information will be retrieved from the mainframe, which will result in an "AOrder" class; this isn't a problem. I am just not sure how to mold this local object into an EDI string. Should I create EDIOrder/EDIOrderResponse/etc. classes with matching decorators (EDIOrderD01BDecorator, EDIOrderD93ADecorator)? Do I need builder objects, or can I do:

        // $myOrder is an instance of AOrder
        $myOrder->toEDIOrder();
        $decorator = new EDIOrderD01BDecorator($myOrder);
        $edi = $decorator->getEDIString();

    It'll have to work the other way around as well. Is the following a good way of handling this problem, or should I go about it differently?

        $ediString = $myEDIMessageBroker->fetch();
        $ediOrderResponse = EDIOrderResponse::fromString($ediString);

    I'm just not so sure how I should go about designing the classes and the interactions between them. Thanks for reading and helping.
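
    For the decorator route, a minimal sketch could look like this (the shared base class, the segment layout, and the getOrderNumber() accessor are all made up for illustration):

        abstract class EDIOrderDecorator
        {
            protected $order;

            public function __construct(AOrder $order)
            {
                $this->order = $order;
            }

            abstract public function getEDIString();
        }

        class EDIOrderD01BDecorator extends EDIOrderDecorator
        {
            public function getEDIString()
            {
                // build the D01b segments from the wrapped order
                return "UNH+1+ORDERS:D:01B:UN'"
                     . "BGM+220+" . $this->order->getOrderNumber() . "+9'";
                // ...remaining segments
            }
        }

        $decorator = new EDIOrderD01BDecorator($myOrder);
        $edi = $decorator->getEDIString();

    A static EDIOrderResponse::fromString() factory for the inbound direction would mirror this: parse the segments once, then hand back a plain object the rest of the application can use.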

    Read the article

  • How to find the window size in XNA

    - by Nick Van Hoogenstyn
    I just wanted to know if there is a way to find out the size of the window in XNA. I don't want to set it to a specific size; I would like to know what dimensions it currently displays at automatically. Is there a way to find this out? I realize I probably should have found this out (or set it myself manually) before working on the game, but I'm a novice and am now hoping to work within the dimensions I have already become invested in. Thanks!
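
    Two properties usually answer this; a short sketch from inside a Game subclass (property names as in XNA 4.0):

        // the client size of the OS window:
        int windowWidth  = Window.ClientBounds.Width;
        int windowHeight = Window.ClientBounds.Height;

        // the resolution the game actually renders at:
        int viewportWidth  = GraphicsDevice.Viewport.Width;
        int viewportHeight = GraphicsDevice.Viewport.Height;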

    Read the article

  • Permanently mounting Windows' NTFS partition, fully enabled

    - by Bart van Heukelom
    I'm transforming a Windows 7 PC into a dual-boot system with Ubuntu 10.10. Following other questions on this site, I've mounted my Windows drive by adding this to fstab:

        UUID=blabla  /windows  ntfs  users,defaults,umask=000  0  0

    It appears to work well; I can read and write. But it still seems a bit crippled: when I tried to update an SVN working copy with RabbitVCS, it complained that it couldn't write to a temporary file inside the working copy, even though the permissions are all 0777 inside /windows (by default; I haven't set that manually). It even corrupted that working copy :( It works when I use the command-line SVN client with sudo, but that's hardly user-friendly.
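
    If it helps, a commonly suggested fstab variant spells out the userspace ntfs-3g driver and file ownership explicitly (a sketch: uid/gid 1000 assumes the first user account, and the UUID placeholder is kept from above):

        UUID=blabla  /windows  ntfs-3g  defaults,uid=1000,gid=1000,umask=000  0  0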

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup:

    - 512x448 window
    - 64x64 grid
    - gl_Position = projection * world * position;
    - projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f), a textbook orthographic projection function
    - world is defined by a fixed camera position at (0, 0)
    - position is defined by the sprite's position

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64), however the unit draws roughly ~10px in the wrong position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost about the proper way to provide a 1:1 pixel-to-world-unit projection. Anyhow, to aid in the problem, I super-imposed a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went back to the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data in the VBO looks like this:

                 x     y         x     y
        ------------------------------------
        tl |    0.0  24.0      64.0  24.0
        bl |    0.0   0.0  ->  64.0   0.0
        tr |   16.0   0.0      80.0   0.0
        br |   16.0  24.0      80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining a 1:1 pixel-to-world-unit projection.
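
    One way to sanity-check the projection (a worked example; the mismatch below is an assumption, not taken from the post): with ortho(-w/2, w/2, -h/2, h/2) and the camera at the origin, a world-space x maps to pixels as

        \[ x_{\mathrm{ndc}} = \frac{2x}{w}, \qquad x_{\mathrm{pixel}} = \frac{x_{\mathrm{ndc}} + 1}{2} \, W_{\mathrm{viewport}} \]

    which is 1:1 only when the w handed to ortho() equals the actual viewport width. If, say, w = 512 while the viewport were 448 wide, x = 64 would land at 64 * 448/512 = 56, an 8 px error of roughly the size described. Comparing the glViewport dimensions against the values fed to ortho() is a cheap first test.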

    Read the article

  • Liskov principle: violation by type-hinting

    - by Elias Van Ootegem
    According to the Liskov principle, a construction like the one below is invalid, as it strengthens a precondition. I know the example is pointless/nonsense, but when I last asked a question like this and used a more elaborate code sample, it seemed to distract people too much from the actual question.

        //Data models
        abstract class Argument
        {
            protected $value = null;

            public function getValue()
            {
                return $this->value;
            }

            abstract public function setValue($val);
        }

        class Numeric extends Argument
        {
            public function setValue($val)
            {
                $this->value = $val + 0; //coerce to number
                return $this;
            }
        }

        //used here:
        abstract class Output
        {
            public function printValue(Argument $arg)
            {
                echo $this->format($arg);
                return $this;
            }

            abstract public function format(Argument $arg);
        }

        class OutputNumeric extends Output
        {
            public function format(Numeric $arg) //<-- VIOLATION!
            {
                $format = is_float($arg->getValue()) ? '%.3f' : '%d';
                return sprintf($format, $arg->getValue());
            }
        }

    My question is this: why would this kind of "violation" be considered harmful? So much so that some languages, like the one I used in this example (PHP), don't even allow it. I'm not allowed to strengthen the type hint of an abstract method but, by overriding the printValue method, I am allowed to write:

        class OutputNumeric extends Output
        {
            final public function printValue(Numeric $arg)
            {
                echo $this->format($arg);
            }

            public function format(Argument $arg)
            {
                $format = is_float($arg->getValue()) ? '%.3f' : '%d';
                return sprintf($format, $arg->getValue());
            }
        }

    But this would imply repeating myself for each and every child of Output, and it makes my objects harder to reuse. I understand why the Liskov principle exists, don't get me wrong, but I find it somewhat difficult to fathom why the signature of an abstract method in an abstract class has to be adhered to so much more strictly than that of a non-abstract method. Could someone explain to me why I'm not allowed to hint at a child class, in a child class? The way I see it, the child class OutputNumeric is a specific use case of Output, and thus might need a specific instance of Argument, namely Numeric. Is it really so wrong of me to write code like this?

    Read the article

  • Can I use Google Search to determine if my website contains original or copied content?

    - by Bas van Vught
    I have a few websites from customers that have (partially) the same content as other websites. I plan on rewriting all content that is not original, but how do I know whether my websites have original content or content that's been copied from another website? My customers say all the content is original, but I have my doubts, to be honest; they often let people who no longer work there write content for the sites. What I have done so far is copy a line from my website that can also be found on other websites and paste it into Google Search. If my website is the first link, would it be considered the original source?

    Read the article

  • Using XSLT for messaging instead of marshalling/unmarshalling Java message objects

    - by Joost van Stuijvenberg
    So far I have been using either handmade or generated (e.g. JAXB) Java objects as 'carriers' for messages in message-processing software such as protocol converters. This often leads to tedious programming, such as copying/converting data from one system's message object to an instance of another system's message object. And it sure brings in lots of Java code with getters and setters for each message attribute, validation code, etc. I was wondering whether it would be a good idea to convert one system's XML message into another system's format, or even convert requests into responses from the same system, using XSLT. This would mean I no longer have to unmarshal XML streams to Java objects, copy/convert data using Java, and marshal the resulting message object to another XML stream. Since each message may actually have a purpose, I would 'link' the message (and the payload it contains in its properties or XML elements/attributes) to EXSLT functions. This would change my design approach from an imperative to a declarative style. Has anyone done this before and, if so, what are your experiences? Does the reduced amount of Java 'boilerplate' code make up for the increased complexity of (E)XSLT?
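
    For what it's worth, the plumbing on the Java side stays small; applying a stylesheet via JAXP is only a few lines (a sketch; the class name and stylesheet path are placeholders):

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;
        import java.io.StringReader;
        import java.io.StringWriter;

        public class MessageConverter {
            // transform one system's XML message into another's; no JAXB involved
            public static String convert(String inputXml, String xsltPath) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xsltPath));
                StringWriter out = new StringWriter();
                t.transform(new StreamSource(new StringReader(inputXml)),
                            new StreamResult(out));
                return out.toString();
            }
        }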

    Read the article

  • htaccess rewrite doesn't work

    - by Raimond van Mouche
    I'm trying to redirect URLs in my /joomla/ folder containing "rsform" to the same URL but with /joomla/ replaced by /formulieren/. However, my attempted .htaccess rewrite doesn't work. I tried:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} rsform
        RewriteRule ^(.+)$ http://watervriendengeleen.nl/joomla/ [L,R=301]

    I also tried URL-specific redirects like:

        Redirect /joomla/index.php?option=com_rsform&formId=12&Itemid=99999 http://sitename.com/formulieren/index.php?option=com_rsform&formId=12&Itemid=99999

    which didn't work either. Any thoughts?
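
    One likely reason neither attempt matches: %{REQUEST_URI} does not include the query string, and the Redirect directive can't match one either, yet "rsform" only occurs in the query string (option=com_rsform). A sketch of a rule that tests %{QUERY_STRING} instead (assuming the .htaccess sits in the site root):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (^|&)option=com_rsform
        RewriteRule ^joomla/(.*)$ /formulieren/$1 [R=301,L]

    The original query string is carried over to the target automatically.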

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'.

    Application-server
    - Should be able to run CentOS (or another light Linux distro)
    - Should be able to run Apache
    - Should be able to run PHP
    - Should be able to run GD (so it does rely on its CPU)
    - Should be extremely reliable and fast

    Database-server
    - Should be able to run MySQL
    - Should be able to... well, do nothing else :P
    - Should be extremely reliable and fast

    Storage-server
    - Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.)
    - Should be able to do nothing else
    - Should be extremely reliable and fast

    So technically, by transferring all static data to 2 different servers/services, the application-server can totally focus on the webpages.

    My questions:

    - What services do you recommend?
    - Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    - How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    - What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network, or do you only pay "including CDN"?
    - Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answers so far: I shouldn't write my own nameserver software. Instead, I should use something like bind (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Must double-tap Windows key to open Dash

    - by Bart van Heukelom
    I'm experiencing some strange behaviour of the Unity Dash and the Windows/Super keyboard key. As far as I know, normal behaviour is:

    - Tap: open Dash
    - Hold: show keyboard shortcut overlay

    However, the behaviour I'm experiencing is:

    - Tap: show keyboard shortcut overlay (after a short delay)
    - Double tap: open Dash
    - Hold: show keyboard shortcut overlay

    What could cause this, and how do I fix it? I'm on a fresh 12.10 (Quantal) installation.

    Read the article

  • New "delay" keyword for JavaScript

    - by Van Coding
    I had a great idea for a new JavaScript keyword, "delay", but I don't know what I can do to bring it to the new specification. I also want to know what you guys think about it and whether it's even realistic.

    What does the delay keyword do? It does nothing more than stop the execution of the current stack and immediately continue to the next "job" in the queue. But that's not all! Instead of discarding the stack, it adds it to the end of the queue. After all "jobs" before it are done, the stack continues to execute.

    What is it good for? delay could help make blocking code non-blocking while it still looks like synchronous code. A short example:

        setTimeout(function(){
            console.log("two");
        }, 0);

        console.log("one");
        delay; // since there is currently another task in the queue, do that task first before continuing
        console.log("three");

        // Outputs: one, two, three

    This simple keyword would allow us to create synchronous-looking code which is asynchronous behind the scenes. Using node.js modules in the browser, for example, would no longer be impossible without trickery. There would be so many possibilities with such a keyword! Is this pattern useful? What can I do to bring this into the new ECMAScript specification?

    Note: I asked this previously on Stack Overflow, where it was closed.

    Read the article

  • How to copy existing movie files to an iPad to watch?

    - by Rob Van Dam
    I have an iPad and a desktop running Ubuntu 12.04 with lots of movie files in various formats (avi, mp4, m4v, etc.). Is it possible to transfer those files from within Linux directly to the iPad (without using iTunes to sync) over USB, and then play those movies on the iPad without reformatting/resizing all of them? I've used iTunes in a Windows XP VirtualBox instance with other iOS mobile devices before, but I would prefer a pure Linux approach if possible. (I'm creating this question so that I can share the solution I found, because I failed to find a simple, satisfactory answer elsewhere.)

    Read the article
