Search Results

Search found 203 results on 9 pages for 'austin lamb'.

Page 1/9 | 1 2 3 4 5 6 7 8 9  | Next Page >

  • SQL in the City - Austin 2012

    A free day of training in Austin, TX with Grant Fritchey, Steve Jones and a few others. Join us to learn about SQL Server and how you can work more efficiently in your job every day. Learn Agile Database Development Best Practices: Agile database development experts Sebastian Meine and Dennis Lloyd are running day-long classes designed to complement Red Gate's SQL in the City US tour. Classes will be held in San Francisco, Chicago, Boston and Seattle. Register Now.

    Read the article

  • Austin Texas - Linux Against Poverty 2010

    Blog of Helios: "It's spring time in Texas. The Bluebonnets are fixin' to get ready to bloom, today's temperature is going to be around 80 degrees Fahrenheit and a solid date for the second annual Linux Against Poverty is, with a fair amount of certainty... official."

    Read the article

  • Penguins Converge on Austin Texas

    Blog of Helios: "While several cities and states across the US have hosted Linux events, Texas has yet to be home to efforts such as the Ohio LinuxFest, SCALE, or the Atlanta Linux Fest. That is, until now."

    Read the article

  • Jimmy Bogard to teach next MVC Boot Camp in Austin, TX on May 26th

    Jimmy Bogard is again teaching the MVC Boot Camp from Headspring: http://www.headspringsystems.com/services/agile-training/mvc-training/ The class runs 3 days, from May 26-28. Give us a call at 512-459-2260 to inquire about an available discount.

    Read the article

  • Austin Group Prepares for Linux Against Poverty

    Blog of Helios: "For those that do not know, Linux Against Poverty is an annual event organized by Lynn Bender that gathers some of the top tech people in the area and assembles them to evaluate, triage, repair and then install the Linux Operating System on those computers."

    Read the article

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of its two CD drives. I want to rip an image of the CD and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, nautilus, or any of the other nice user-friendly things). There are two CD drives connected via SCSI:

      austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: ATA      Model: WDC WD400EB-75CP Rev: 06.0
        Type:   Direct-Access                    ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 00 Lun: 00
        Vendor: Lite-On  Model: LTN486S 48x Max  Rev: YDS6
        Type:   CD-ROM                           ANSI SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 01 Lun: 00
        Vendor: SAMSUNG  Model: CD-R/RW SW-248F  Rev: R602
        Type:   CD-ROM                           ANSI SCSI revision: 05

    However, when I try mounting either of these devices (and every other device that could possibly be the CD drive), it says no medium found:

      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom
      mount: no medium found on /dev/sr1
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom
      mount: no medium found on /dev/sr0
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom
      mount: no medium found on /dev/sr1
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom
      mount: no medium found on /dev/sr0
      austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom
      mount: no medium found on /dev/sr1

    Here are the contents of my /dev folder:

      austin@austinvpn:/proc/scsi$ ls /dev
      agpgart loop6 ram6 tty10 tty38 tty8
      austinvpn loop7 ram7 tty11 tty39 tty9
      block lp0 ram8 tty12 tty4 ttyS0
      bsg mapper ram9 tty13 tty40 ttyS1
      btrfs-control mcelog random tty14 tty41 ttyS2
      bus mem rfkill tty15 tty42 ttyS3
      cdrom net root tty16 tty43 urandom
      cdrom1 network_latency rtc tty17 tty44 usbmon0
      cdrw network_throughput rtc0 tty18 tty45 usbmon1
      char null scd0 tty19 tty46 usbmon2
      console oldmem scd1 tty2 tty47 usbmon3
      core parport0 sda tty20 tty48 usbmon4
      cpu_dma_latency pktcdvd sda1 tty21 tty49 vcs
      disk port sda2 tty22 tty5 vcs1
      dri ppp sda5 tty23 tty50 vcs2
      ecryptfs psaux sg0 tty24 tty51 vcs3
      fb0 ptmx sg1 tty25 tty52 vcs4
      fd pts sg2 tty26 tty53 vcs5
      full ram0 shm tty27 tty54 vcs6
      fuse ram1 snapshot tty28 tty55 vcs7
      hpet ram10 snd tty29 tty56 vcsa
      input ram11 sndstat tty3 tty57 vcsa1
      kmsg ram12 sr0 tty30 tty58 vcsa2
      log ram13 sr1 tty31 tty59 vcsa3
      loop0 ram14 stderr tty32 tty6 vcsa4
      loop1 ram15 stdin tty33 tty60 vcsa5
      loop2 ram2 stdout tty34 tty61 vcsa6
      loop3 ram3 tty tty35 tty62 vcsa7
      loop4 ram4 tty0 tty36 tty63 vga_arbiter
      loop5 ram5 tty1 tty37 tty7 zero

    And here is my fstab file:

      austin@austinvpn:/proc/scsi$ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid -o value -s UUID' to print the universally unique identifier
      # for a device; this may be used with UUID= as a more robust way to name
      # devices that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1
      # /boot was on /dev/sda1 during installation
      UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2
      /dev/mapper/austinvpn-swap_1 none swap sw 0 0

    Am I missing something/doing something wrong, or is there just no CD in the drive, or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!
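
    A quick way to answer "is there a disc?" without mounting is to ask the kernel for the drive status through the CDROM_DRIVE_STATUS ioctl from <linux/cdrom.h>. Below is a minimal Python sketch of that check (my own illustration, not from the original post; the ioctl numbers are the kernel-defined constants, while the script itself and the device paths are assumptions based on the output above):

      import os
      import fcntl

      CDROM_DRIVE_STATUS = 0x5326  # ioctl number from <linux/cdrom.h>
      CDSL_CURRENT = 0x7fffffff    # "current slot/disc" (INT_MAX), also from <linux/cdrom.h>
      STATUS = {0: "no info", 1: "no disc", 2: "tray open",
                3: "drive not ready", 4: "disc ok"}

      for dev in ("/dev/sr0", "/dev/sr1"):
          # O_NONBLOCK lets the open succeed even when no medium is present.
          fd = os.open(dev, os.O_RDONLY | os.O_NONBLOCK)
          try:
              code = fcntl.ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT)
              print("%s: %s" % (dev, STATUS.get(code, code)))
          finally:
              os.close(fd)

    A result of "disc ok" means there is readable media in that drive; "no disc" or "tray open" settles the question without any mounting at all.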

    Read the article

  • Nesting, grouping Sqlite syntax?

    - by Linda
    I can't for the life of me figure out this SQLite syntax. Our database contains records like:

      TX, Austin
      OH, Columbus
      OH, Columbus
      TX, Austin
      OH, Cleveland
      OH, Dayton
      OH, Columbus
      TX, Dallas
      TX, Houston
      TX, Austin

    (A state-field and a city-field.) I need output like this:

      OH: Columbus, Cleveland, Dayton
      TX: Dallas, Houston, Austin

    (Each state listed once... and all the cities in that state.) What would the SELECT statement(s) look like?
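
    One possible answer, sketched here with Python's sqlite3 module and hypothetical table/column names (places, state, city): GROUP BY collapses each state to one row, and group_concat(DISTINCT city) joins that state's unique cities. Note that SQLite only allows the default comma separator when DISTINCT is used:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE places (state TEXT, city TEXT)")
      conn.executemany("INSERT INTO places VALUES (?, ?)", [
          ("TX", "Austin"), ("OH", "Columbus"), ("OH", "Columbus"),
          ("TX", "Austin"), ("OH", "Cleveland"), ("OH", "Dayton"),
          ("OH", "Columbus"), ("TX", "Dallas"), ("TX", "Houston"),
          ("TX", "Austin"),
      ])

      rows = conn.execute("""
          SELECT state, group_concat(DISTINCT city)
          FROM places
          GROUP BY state
          ORDER BY state
      """)
      for state, cities in rows:
          print("%s: %s" % (state, cities))  # OH: Columbus,Cleveland,Dayton ...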

    Read the article

  • Webinar: NoSQL - Data Center Centric Application Enablement

    - by Charles Lamb
    NoSQL - Data Center Centric Application Enablement

    AUGUST 6 WEBINAR

    About the Webinar: The growth of data center infrastructure is trending out of bounds, along with the pace of user activity and data generation in this digital era. However, the nature of the typical application deployment within the data center is changing to accommodate new business needs. Those changes introduce complexities in application deployment architecture and design, which cascade into requirements for a new generation of database technology (NoSQL) destined to ease that complexity. This webcast will discuss the modern data center's data-centric applications, the complexities that must be dealt with, and common architectures found to describe and prescribe new data-center-aware services. We'll look at the practical issues in implementation and give an overview of the current state of the art in NoSQL database technology for solving the problems of data center awareness in application development.

    REGISTER NOW >> MORE INFORMATION >>

    NOTE! All attendees will be entered to win a guest pass to the NoSQL Now! 2013 Conference & Expo.

    About the Speaker: Robert Greene, Oracle NoSQL Product Management. Robert Greene is a principal product manager / strategist for Oracle's NoSQL Database technology. Prior to Oracle he was the V.P. of Technology for a NoSQL database company, Versant Corporation, where he set the strategy for alignment with Big Data technology trends, resulting in the acquisition of the company by Actian Corp in 2012. Robert has been an active member of both commercial and open source initiatives in the NoSQL and object-relational mapping spaces for the past 18 years, developing software, leading project teams, authoring articles, and presenting at major conferences on these topics. In a previous life, Robert was an electronics engineer developing first-generation wireless, spread-spectrum-based security systems.

    Read the article

  • Berkeley DB Java Edition 4.0.103 Available

    - by charles.lamb
    We'd like to let you know that JE 4.0.103 is now at http://www.oracle.com/technology/software/products/berkeley-db/je/index.html. The patch release contains both small features and bug fixes, many of which were prompted by feedback on this forum. Some items to note:

    - New CacheMode values for more control over cache policies, and new statistics to enable better interpretation of caching behavior. These are just one initial part of our continuing work in progress to make JE caching more efficient.
    - Fixes for proper cache utilization calculations when using the -XX:+UseCompressedOops JVM option.
    - A variety of other bug fixes.

    There are no file format or API changes. As always, we encourage users to move promptly to this new release.

    Read the article

  • Join Domain and Dos App

    - by Austin Lamb
    OK, so first off, yes, I have read all the related topics, and those fixes are either out of date or don't work. I am running Ubuntu 12.04 and I would like to add it to the Win2008 server network. After I get that done, I would like to mount the F:\ drive of the server somewhere on my Linux machine where it can be identified as drive F:\ by Wine or DOSEMU. If I can achieve all of that, I need to find out how to run an MS-DOS 16-bit point-of-sale graphical program in Ubuntu, whether that be through Wine, DOSEMU, or DOSBox. It does not matter; it just has to be able to read and write to the server's F: drive, operate the DOS app, and support LPT1 (I think) for printing receipts and loading tickets. I am a decently knowledgeable Windows tech, at least that's what my job description says, but this is my first encounter with Linux in a work environment. It could prove to be very experience-changing if I can just prove it as a practical theory and a reasonable solution, and get it to work. The first step is to get it joined to the domain. I have likewise-open CLI and GUI versions, samba, and GADMIN-SAMBA installed in attempts to get any of them to work. Any help in any area is greatly appreciated, especially with the domain joining, since it is the first step and what I thought would be the easiest step.

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13 byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690, 2.9 GHz, 2 sockets, 32 threads, 193 GB RAM, and the other 3 were Xeon E5-2680, 2.7 GHz, 2 sockets, 32 threads, 193 GB RAM. There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670, 2.93 GHz, 2 sockets, 24 threads, 96 GB RAM. Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids.

    The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling.

    Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared-key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records, with the number of shards and the replication factor in parentheses. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes, and "Threads" is the number of threads per process, with the total number of threads in parentheses. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines. All figures are for the mixed (95% read / 5% update) workload.

      Shards | KVS Size (records) | Clients | Threads   | Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
      2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                      | 0.76/1/3                     | 3.08/8/35
      4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                      | 0.79/1/4                     | 3.82/16/45
      8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                    | 0.85/2/5                     | 4.29/21/51
      10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                    | 0.88/2/6                     | 4.47/23/53
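
    For readers unfamiliar with the versioned read/modify/write pattern described above, here is a generic sketch against a toy in-memory store (all names here are invented for illustration; the actual benchmark used Oracle NoSQL Database's versioned operations, not this code):

      class VersionConflict(Exception):
          pass

      class ToyStore:
          """In-memory stand-in for a versioned key-value store."""
          def __init__(self):
              self.data = {}  # key -> (version, value)

          def get(self, key):
              return self.data.get(key, (0, None))

          def put_if_version(self, key, value, expected_version):
              version, _ = self.data.get(key, (0, None))
              if version != expected_version:
                  raise VersionConflict(key)  # another writer got there first
              self.data[key] = (version + 1, value)

      def read_modify_write(store, key, update):
          # Retry until the write applies to the exact version we read,
          # so no concurrent update is ever silently overwritten.
          while True:
              version, value = store.get(key)
              try:
                  store.put_if_version(key, update(value), version)
                  return
              except VersionConflict:
                  continue

      store = ToyStore()
      read_modify_write(store, "user1", lambda v: (v or 0) + 1)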

    Read the article

  • Oracle NoSQL Database Using FusionIO ioDrive2

    - by Charles Lamb
    We ran some benchmarks using FusionIO ioDrive2 SSD drives and Oracle NoSQL Database. FusionIO has published a whitepaper with the results of the benchmarks. "Results of testing showed that using an ioDrive2 for data delivered nearly 30 times more operations per second than a 300GB 10k SAS disk on a 90 percent read and 10 percent write workload and nearly eight times more operations per second on a 50 percent read and 50 percent write workload. Equally impressive, an ioDrive2 reduced latency over 700 percent (seven times) on inserts in a 90 percent read and 10 percent write workload and over 5800 percent (58 times) on reads in a 50 percent read and 50 percent write workload."

    Read the article

  • Encoding arbitrary data into numbers?

    - by Pekka
    Is there a common method to encode and decode arbitrary data so the encoded end result consists of numbers only - like base64_encode but without the letters? Fictitious example:

      $encoded = numbers_encode("Mary had a little lamb");
      echo $encoded; // outputs e.g. 122384337422394237423 (fictitious result)
      $decoded = numbers_decode("122384337422394237423");
      echo $decoded; // outputs "Mary had a little lamb"
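
    One way to get this effect (a sketch of mine, not a standard library function): treat the raw bytes as one large base-256 integer and write it out in base 10; decoding reverses the conversion. In Python:

      def numbers_encode(text):
          # Prefix a 0x01 sentinel byte so leading NUL bytes survive the trip.
          return str(int.from_bytes(b"\x01" + text.encode("utf-8"), "big"))

      def numbers_decode(digits):
          n = int(digits)
          raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
          return raw[1:].decode("utf-8")  # drop the sentinel byte

      encoded = numbers_encode("Mary had a little lamb")
      print(encoded)                  # a long run of decimal digits
      print(numbers_decode(encoded))  # Mary had a little lamb

    The trade-off is size: decimal costs roughly 2.4 digits per byte, versus about 1.33 characters per byte for base64.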

    Read the article

  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database. Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. 10 years ago when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner.

    Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput. A cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to:

    1. Work well out of the box, and
    2. Be configurable so that customers can tune it for their specific workloads and requirements.

    The JE cleaner has been field tested in production for many years, managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.

    Read the article

  • Passoker Online Betting Use of Oracle NoSQL Database

    - by Charles Lamb
    Here's an Oracle NoSQL Database customer success story for Passoker, an online betting house. http://www.oracle.com/us/corporate/customers/customersearch/passoker-1-nosql-ss-1863507.html There are a lot of great points made in the Solutions section, but as a developer the one I like the most is this one: Eliminated daily maintenance related to single-node points-of-failure by moving to Oracle NoSQL Database, which is designed to be resilient and hands-off, thus minimizing IT support costs

    Read the article

  • Blueprints API for Oracle NoSQL Database

    - by Charles Lamb
    Here's an implementation of the Blueprints API for Oracle NoSQL Database: https://github.com/dwmclary/blueprints-oracle-nosqldb

    Blueprints is a collection of interfaces, implementations, ouplementations, and test suites for the property graph data model. Blueprints is analogous to the JDBC, but for graph databases. As such, it provides a common set of interfaces to allow developers to plug-and-play their graph database backend. Moreover, software written atop Blueprints works over all Blueprints-enabled graph databases. Within the TinkerPop software stack, Blueprints serves as the foundational technology for:

    - Pipes: a lazy, data flow framework
    - Gremlin: a graph traversal language
    - Frames: an object-to-graph mapper
    - Furnace: a graph algorithms package
    - Rexster: a graph server

    Read the article

  • Upload large files in .NET

    - by Austin
    I've done a good bit of research to find an upload component for .NET that I can use to upload large files, show a progress bar, and resume the upload of large files. I've come across some components like AjaxUploader, SlickUpload, and PowUpload, to name a few. Each of these options costs money, and only PowUpload does resumable uploads, but it does it with a Java applet. I'm willing to pay for a component that does those things well, but if I could write it myself that would be best. I have two questions:

    1. Is it possible to resume a file upload on the client without using Flash/Java/Silverlight?
    2. Does anyone have some code or a link to an article that explains how to write a .NET HTTPHandler that will allow streaming upload and an AJAX progress bar?

    Thank you, Austin

    [Edit] I realized I do need to be able to do resumable file uploads for my project; any suggestions for components that can do that?
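
    On the resume question: the handshake is independent of .NET. The client asks the server how many bytes of the file it already has, then sends only the remainder, which the server appends. A minimal Python sketch of that protocol (the endpoints and the X-Received-Bytes header are invented for illustration; a real handler would add authentication and checksums):

      import os
      from http.server import BaseHTTPRequestHandler, HTTPServer

      UPLOAD_DIR = "uploads"  # assumption: one file per upload path

      class ResumableUpload(BaseHTTPRequestHandler):
          def target(self):
              return os.path.join(UPLOAD_DIR, os.path.basename(self.path))

          def do_HEAD(self):
              # Client probes how much already arrived to pick its offset.
              path = self.target()
              size = os.path.getsize(path) if os.path.exists(path) else 0
              self.send_response(200)
              self.send_header("X-Received-Bytes", str(size))
              self.end_headers()

          def do_PUT(self):
              # Append the next chunk; the client resumes from the probed size.
              length = int(self.headers["Content-Length"])
              with open(self.target(), "ab") as f:
                  f.write(self.rfile.read(length))
              self.send_response(204)
              self.end_headers()

      if __name__ == "__main__":
          os.makedirs(UPLOAD_DIR, exist_ok=True)
          HTTPServer(("", 8000), ResumableUpload).serve_forever()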

    Read the article

  • postfix Mail filters not running behind a controlled enviornment

    - by Ashish
    Hi, I have deployed a postfix server for email receiving. On this I have configured a SenderID + SPF milter, by referring to http://www.postfix.org/MILTER_README.html. The command that I used is as follows:

      ./sid-filter -u postfix -p inet:10027@localhost -l

    Following are my settings in the main.cf file:

      # Milter support for smtpd mail
      smtpd_milters = inet:localhost:10027, inet:localhost:10028
      # Milters for non-SMTP mail.
      non_smtpd_milters = inet:localhost:10027, inet:localhost:10028
      milter_default_action = reject
      # Postfix >= 2.6
      #milter_protocol = 6
      # 2.3 <= Postfix <= 2.5
      milter_protocol = 2

    Now I have this observation:

    1. One of the postfix servers, set up on AWS CentOS 5.5, is working fine and is able to receive mails on the defined MX record.
    2. A similar postfix server (as in step 1), set up behind one of the corporate firewalls, is not able to receive any mails and gives the following type of error logs:

      connect from xxxxxx.austin.hp.com[xx.xxx.96.198]
      May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198]
      May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id=
      May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo=

    Here the 'sid-filter' is giving problems. Any idea what I am doing wrong? Please help.

    Thanks in advance
    Ashish Sharma

    Read the article

  • Access Control Lists in Debian Lenny

    - by arbales
    So, for my clients who have sites hosted on my server, I create user accounts, with standard home folders inside /home. I set up an SSH jail for all the collective users, because I really am against using a separate FTP server. Then I installed ACL and added acl to my /etc/fstab; all good. I cd into /home and chmod 700 ./*. At this point users cannot see into other users' home directories (yay), but apache can't see them either (boo). I ran setfacl u:www-data:rx ./*. I also tried individual directories. Now apache can see the sites again, but so can all the users. ACL changed the permissions of the home folders to 750. How do I set up ACLs so that 1. Apache can see the sites hosted in users' home folders, AND 2. users can't see outside their home and into others' files?

    Edit: more details:

    Output after chmod -R 700 ./*:

      sh-3.2# chmod 700 ./*
      sh-3.2# ls -l
      total 72
      drwx------+ 24 austin austin     4096 Jul 31 06:13 austin
      drwx------+  8 jeremy collective 4096 Aug  3 03:22 jeremy
      drwx------+ 12 josh   collective 4096 Jul 26 02:40 josh
      drwx------+  8 joyce  collective 4096 Jun 30 06:32 joyce

    (Not accessible to other users OR apache.)

      setfacl -m u:www-data:rx jeremy

    (Now accessible to apache and to members of collective. Why collective, too?)

      sh-3.2# getfacl jeremy
      # file: jeremy
      # owner: jeremy
      # group: collective
      user::rwx
      user:www-data:r-x
      group::r-x
      mask::r-x
      other::---

    Solution: Ultimately what I did was:

      chmod 755 *
      setfacl -R -m g::--- *
      setfacl -R -m u:www-data:rx *

    Read the article

  • What are the leading professional journals in software development?

    - by Austin Hyde
    In one of my classes, we were asked to research the top professional journals in our field. According to what I can dig up, the ACM and IEEE journals are the "best", as they come up at the top of my searches and this question. However, there are a dozen or so separately themed journals for each, with no very clear measure of which one is most useful, popular, etc. For example, "IEEE Software" vs. "IEEE Transactions on Software Engineering". So, what do you consider to be the "leading" professional journals (specifically), and why? It doesn't have to be only ACM or IEEE, either. If you know of another, please add it.

    Read the article

  • How To Approach 360 Degree Snake

    - by Austin Brunkhorst
    I've recently gotten into XNA and must say I love it. As sort of a hello-world game I decided to create the classic game "Snake". The 90-degree version was very simple and easy to implement. But as I try to make a version of it that allows 360-degree rotation using left and right arrows, I've come into sort of a problem. What I'm doing now stems from the 90-degree version: iterating through each snake body part beginning at the tail, and ending right before the head. This works great when moving every 100 milliseconds. The problem with this is that it makes for a choppy style of gameplay, as technically the game progresses at only 6 fps rather than its potential 60. I would like to move the snake every game loop. But unfortunately, because the snake moves at the rate of its head's size, it goes way too fast. This would mean that the head would need to move at a much smaller increment, such as (2, 2) in its direction rather than what I have now (32, 32). Because I've been working on this game off and on for a couple of weeks while managing school, I think that I've been thinking too hard on how to accomplish this. It's probably a simple solution; I'm just not catching it. Here's some pseudocode for what I've tried, based off of what makes sense to me. I can't really think of another way to do it.

      for(int i = SnakeLength - 1; i > 0; i--){
          current = SnakePart[i], next = SnakePart[i - 1];
          current.x = next.x - (current.width * cos(next.angle));
          current.y = next.y - (current.height * sin(next.angle));
          current.angle = next.angle;
      }
      SnakeHead.x += cos(SnakeAngle) * SnakeSpeed;
      SnakeHead.y += sin(SnakeAngle) * SnakeSpeed;

    This produces something like this: Code in Action. As you can see, each part always stays behind the head and doesn't make a "trail" effect. A perfect example of what I'm going for can be found here: Data Worm. Not the viewport rotation, but the trailing effect of the triangles. Thanks for any help!
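
    The usual fix for the trailing effect (my sketch, not from the thread): move the head a small step every frame, push its position onto a trail, and place each body segment at the point the head occupied a fixed number of samples earlier. A Python sketch with a hypothetical SEGMENT_SPACING constant:

      import math

      SEGMENT_SPACING = 16  # trail samples between segments; tune to segment size

      class Snake:
          def __init__(self, num_segments, x, y):
              self.angle = 0.0          # radians; steer with left/right arrows
              self.speed = 2.0          # small per-frame step for smooth motion
              self.head = (x, y)
              self.trail = [self.head]  # newest position first
              self.num_segments = num_segments

          def update(self):
              x, y = self.head
              self.head = (x + math.cos(self.angle) * self.speed,
                           y + math.sin(self.angle) * self.speed)
              self.trail.insert(0, self.head)
              # Keep only as much history as the body needs.
              del self.trail[self.num_segments * SEGMENT_SPACING + 1:]

          def segment_positions(self):
              # Segment i replays the path the head traced i*SPACING frames ago.
              return [self.trail[min(i * SEGMENT_SPACING, len(self.trail) - 1)]
                      for i in range(1, self.num_segments + 1)]

    Because segments replay the head's actual path, turns produce the smooth trailing curve of Data Worm instead of the rigid offsets the posted loop computes.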

    Read the article

  • Problems with both LightDM and GDM using DisplayLink USB monitor

    - by Austin
    When I use LightDM, it will auto-login to the desktop just fine. The only problem is Compiz doesn't work, and menus don't work. I can't right-click the desktop, and I can't select program menus in the top bar (i.e. clicking "File" does nothing). When I use GDM, I only get a blank blue screen and the mouse cursor. I can't Ctrl+Alt+Backspace to restart, but I can Ctrl+Alt+F1 and Ctrl+Alt+F7 to switch modes. I don't think it's auto-logging me in, but I'm not sure. It plays the login screen noise. Will update with more information when I get home!

    EDIT: Okay, so I did a fresh install, just to ensure I hadn't borked something playing in the console. I reconfigured my setup as I did before, with the same results. Here's what I followed. The only difference is that instead of setting "vga=normal nomodeset" I set "GRUB_GFXPAYLOAD_LINUX = text". Also, I only have the DisplayLink monitor configured in my xorg.conf file. At this point I'm using the open radeon driver, although I used the proprietary ati driver before. I'm not sure if I'm having a problem with:

    - X configuration
    - Graphics driver
    - DisplayLink driver
    - Unity
    - LightDM
    - Compiz
    - Or something else

    The resolution of the monitor is 800x480, 16-bit. I tried setting a larger virtual resolution of 1200x720 (because the real resolution is lower than the recommended resolution), but it causes Ubuntu to boot into low-graphics mode. When I get home I'm going to install the fglrx driver and see if it enables virtual resolutions, which may further enable my window manager to function properly.

    Read the article

  • Is it true that quickly closing a webpage opened from a search engine result can hurt the site's ranking?

    - by Austin ''Danger'' Powers
    My web designer recently told me that I need to be careful not to Google for my business' website, click on its search result link, then quickly close the page (or click back) too many times. He says "Google knows" that I didn't stay on the page, and could penalize my site for having a high click-through rate if it happens too much (it was something along those lines; I forget the exact wording). Apparently, it could look like the behavior of a visitor who was not interested in what they found (hence the alleged detrimental effect on the site's search ranking). This sounds hard to believe to me because I would not have thought any information is transmitted which tells Google (or anyone, for that matter) whether or not a website is still open in a browser (in my case Firefox v25.0). Could there possibly be any truth to this? If not, why might he have come to this conclusion? Is there some click-tracking or similar technology employed by search engines which does something similar? Looking forward to hearing everyone's thoughts.

    Read the article
