Search Results

Search found 13928 results on 558 pages for 'large scale nat'.

Page 73/558 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • JavaScript Image zoom with CSS3 Transforms, How to calculate Origin? (with example)

    - by Sunday Ironfoot
    I'm trying to implement an image zoom effect, a bit like how the zoom works with Google Maps, but with a grid of fixed-position images. I've uploaded an example of what I have so far here: http://www.dominicpettifer.co.uk/Files/MosaicZoom.html (uses CSS3 transforms, so it only works with Firefox, Opera, Chrome or Safari). Use your mouse wheel to zoom in/out. The HTML source is basically an outer div with an inner div, and that inner div contains 16 images arranged using absolute positioning. It's going to be a photo mosaic, basically. I've got the zoom bit working using CSS3 transforms: $(this).find('div').css('-moz-transform', 'scale(' + scale + ')'); ...however, I'm relying on the mouse X/Y position over the outer div to zoom in on where the mouse cursor is, similar to how Google Maps functions. The problem is that if you zoom right in on an image, move the cursor to the bottom/left corner and zoom again, instead of zooming to the bottom/left corner of the image it zooms to the bottom/left of the entire mosaic. This makes the view appear to jump about the mosaic as you zoom in closer while moving the mouse around, even slightly. That's basically the problem: I want the zoom to work exactly like Google Maps, where it zooms exactly to where your mouse cursor is, but I can't get my head around the maths to calculate the transform-origin X/Y values correctly. Please help, I've been stuck on this for 3 days now. Here is the full code listing for the mouse wheel event:

        var scale = 1;
        $("#mosaicContainer").mousewheel(function(e, delta) {
            if (delta > 0) { scale += 1; } else { scale -= 1; }
            scale = scale < 1 ? 1 : (scale > 40 ? 40 : scale);
            var x = e.pageX - $(this).offset().left;
            var y = e.pageY - $(this).offset().top;
            $(this).find('div').css('-moz-transform', 'scale(' + scale + ')')
                               .css('-moz-transform-origin', x + 'px ' + y + 'px');
            return false;
        });
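
    One way to get the Google-Maps-style behaviour is to stop moving transform-origin around: keep the origin at 0 0, track an explicit translation, and recompute that translation on every wheel event so the content point under the cursor stays under the cursor. The sketch below is an assumption-laden rework of the handler above (same jQuery mousewheel plugin and #mosaicContainer element as in the question), not a drop-in fix:

        var scale = 1, tx = 0, ty = 0;                      // current zoom and pan of the inner div
        $('#mosaicContainer').mousewheel(function (e, delta) {
            var newScale = Math.min(40, Math.max(1, scale + (delta > 0 ? 1 : -1)));
            var mx = e.pageX - $(this).offset().left;       // mouse position in container coordinates
            var my = e.pageY - $(this).offset().top;
            var cx = (mx - tx) / scale;                     // which unscaled content point is under the cursor?
            var cy = (my - ty) / scale;
            tx = mx - cx * newScale;                        // pick the translation that keeps that same
            ty = my - cy * newScale;                        // content point under the cursor after zooming
            scale = newScale;
            $(this).find('div').css({
                '-moz-transform-origin': '0 0',
                '-moz-transform': 'translate(' + tx + 'px, ' + ty + 'px) scale(' + scale + ')'
            });
            return false;
        });

    The same arithmetic answers the original question too: the fixed point of a pure scale is its origin, so if you insist on transform-origin alone, the origin has to be solved from "rendered position of the content point under the cursor equals the cursor position" rather than set to the raw mouse coordinates.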

    Read the article

  • Symfony2 Entity to array

    - by Adriano Pedro
    I'm trying to migrate my flat PHP project to Symfony2, but it's turning out to be very hard. For instance, I have a table of product specifications that holds several specifications, distinguishable by the "cat" attribute in that Extraspecs DB table. I've therefore created an Entity for that table and want to build an array of just the specifications with "cat" = 0... I suppose the code is this one, right? $typeavailable = $this->getDoctrine() ->getRepository('LabsCatalogBundle:ProductExtraspecsSpecs') ->findBy(array('cat' => '0')); Now how can I put this into an array to work with a form like this? $form = $this ->createFormBuilder($product) ->add('specs', 'choice', array('choices' => $typeavailableArray, 'multiple' => true)) Thank you in advance :) # Thank you all... but now I've come across another problem. In fact I'm building a form from an existing object: $form = $this ->createFormBuilder($product) ->add('name', 'text') ->add('genspec', 'choice', array('choices' => array('0' => 'None', '1' => 'General', '2' => 'Specific'))) ->add('isReg', 'choice', array('choices' => array('0' => 'Material', '1' => 'Reagent', '2' => 'Antibody', '3' => 'Growth Factors', '4' => 'Rodents', '5' => 'Lagomorphs'))) So, in this case my current value is named "extraspecs", and I've added it like this: ->add('extraspecs', 'entity', array( 'label' => 'desc', 'empty_value' => ' --- ', 'class' => 'LabsCatalogBundle:ProductExtraspecsSpecs', 'property' => 'specsid', 'query_builder' => function(EntityRepository $er) { return $er ->createQueryBuilder('e'); But "extraspecs" comes from a oneToMany relationship where every product has several extraspecs... Here is the ORM: Labs\CatalogBundle\Entity\Product: type: entity table: orders__regmat id: id: type: integer generator: { strategy: AUTO } fields: name: type: string length: 100 catnumber: type: string scale: 100 brand: type: integer scale: 10 company: type: integer scale: 10 size: type: decimal scale: 10 units: type: integer scale: 10 price: type: decimal scale: 10 reqcert: type: integer scale: 1 isReg: type: integer scale: 1 genspec: type: integer scale: 1 oneToMany: extraspecs: targetEntity: ProductExtraspecs mappedBy: product Labs\CatalogBundle\Entity\ProductExtraspecs: type: entity table: orders__regmat__extraspecs fields: extraspecid: id: true type: integer unsigned: false nullable: false generator: strategy: IDENTITY regmatid: type: integer scale: 11 spec: type: integer scale: 11 attrib: type: string length: 20 value: type: string length: 200 lifecycleCallbacks: { } manyToOne: product: targetEntity: Product inversedBy: extraspecs joinColumn: name: regmatid referencedColumnName: id How should I do this? Thank you!!!

    Read the article

  • AD domain on web servers behind NAT - DNS issues?

    - by Ant
    I'm trying to setup an AD domain to manage the security between two Windows Server 2008 webservers that will sooner or later use NLB to balance website requests. I've hit a problem which I think is a simple solution and is down to DNS. My website domain is mydomain.com. The two servers are running behind a NAT firewall on the 10.0.0.0 IP range. I've setup the AD domain to be called ad.mydomain.com (as recommended by MS and a few other answers to questions on here). The second web server however doesn't want to join the domain, and gives an error pinning the problem on DNS - "ensure that the domain name is typed correctly" even though it queries the SRV record successfully and gets the correct DC back - dc.ad.mydomain.com. Doing a dcdiag /test:dns on the DC gives the Delegation error 'DNS Server dc.mydomain.com Missing glue A record'. I have a feeling I need to add something to the public DNS so that it in some way knows about ad.mydomain.com. Can anyone suggest whether I'm on the right track in adding something to the public DNS? Or whether it's something else? Many thanks

    Read the article

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day) As I understand it, when we finally make the switch to IPv6, not only will NAT be unnecessary, it is also incompatible with IPv6. Will that mean ISPs have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer, or will each device, as it connects, get an IP address that isn't necessarily near those of the other devices in the house? Overall, will this be bad for Internet users? Surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses. And if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, this seems like the perfect opportunity for ISPs to charge more. I may have this completely wrong, so is there an explanation somewhere of how things will work when IPv6 becomes the norm?

    Read the article

  • Large scale perspective lights casting shadow maps, in the most optimized way?

    - by meds
    I'm using projected texture shadows coupled with lights to light a large sports field at night. To do this I'm using shadow cameras which I place at the position of the stadium lights and point down at the field at the appropriate angle. The problem with this method is that the textures I render the shadows into have to be very large to keep sufficient detail over the entire stadium. This is very poorly optimised, since at any given moment the player's attention is directed at only a small portion of the field, meaning large chunks of the texture just take up space with no benefit. However, the lights need to be perspective based, as they come from actual light sources hovering over the stadium. The way to solve this, I believe, is to figure out where within the shadow camera's view frustum to place the actual rendering camera, and to adjust its view matrix according to that position. So my question is: how can I calculate the optimal position for the shadow camera, and calculate its view matrix, such that the shadows it projects still appear to come from the light source rather than from the camera?

    Read the article

  • Should we enforce code style in our large codebase?

    - by eighttrackmind
    By "code style" I mean 2 things: Style, eg. // bad if(foo){ ... } // good if (foo) { ... } Conventions and idiomaticity, where two ways of writing the same thing are functionally equivalent, but one is more idiomatic. eg. // bad if (fooLib.equals(a, b)) { ... } // good if (a == b) { ... } I think it makes sense to use an auto-formatter to enforce #1 automatically. So my question is specifically about #2. I like to break things down into pros and cons, here's what I've come up with so far: Pros: Used by many large codebases (eg. Google, jQuery) Helps make it a bit easier to work on new areas of the codebase Helps make code more portable (this is not necessarily true) Code style is automatic once you get used to it Makes it easier to fast-decline pull requests Cons: Takes engineers’ and code reviewers’ time away from more important things (like developing features) Code should ideally be rewritten every 2-3 years anyway, so it’s more important to focus on getting the architecture right, and achieving high test coverage Adds strain to code reviews (eg. “don’t do it this way, I like this other way better”) Even if I’ve been using a code style for a while, I still sometime have to pause and think about how to write a line better Having an enforced, uniform code style makes it hard to experiment with potentially better styles Maintaining a style guide takes a lot of incremental effort Engineers rarely read through the style guide. More often, it's cited in code reviews And as a secondary question: we also have many smaller repositories - should the same code style be enforced there?

    Read the article

  • How can we make agile enjoyable for developers who like to personally and independently own large chunks of work from start to finish?

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that has bumped up against this. How have others approached this? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • Handling extremely large numbers in a language which can't?

    - by Mallow
    I'm trying to think about how I would do calculations on arbitrarily large numbers (integers, not floats) when the language itself is incapable of handling numbers larger than a certain value. I'm sure I'm not the first nor the last to ask this question, but the search terms I'm using aren't turning up an algorithm for handling these situations; most suggestions offer a language change or a variable-type change, or talk about things irrelevant to my search, so I need a little guidance. I would sketch out an algorithm like this: determine the maximum length of the integer type for the language; if a number is more than half that maximum length, split it into an array (to give a little play room). Array order: [0] = the digits furthest to the right, [n-max] = the digits furthest to the left. Example, for the number 29392023: Array[0]: 23, Array[1]: 20, Array[2]: 39, Array[3]: 29. Since I established half the length of the variable as the cut-off point, I can then line up the ones, tens, hundreds, etc. places via the halfway mark, so that if a variable's maximum length was 10 digits (0 to 9999999999), halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of Array[0] is in the same place as the first digit (from the right) of Array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations for supporting larger numbers than the language natively can.
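
    As a concrete sketch of the scheme described above (an illustration in JavaScript, not the only way to do it): store the number as an array of base-10000 "limbs", least-significant limb first, so that adding two limbs plus a carry can never overflow the language's native integers. The limb size and helper names are arbitrary choices:

        var BASE = 10000;

        function toLimbs(str) {                        // "29392023" -> [2023, 2939]
            var limbs = [];
            for (var i = str.length; i > 0; i -= 4) {
                limbs.push(parseInt(str.slice(Math.max(0, i - 4), i), 10));
            }
            return limbs;
        }

        function addLimbs(a, b) {                      // limb-wise addition with carry
            var out = [], carry = 0;
            for (var i = 0; i < Math.max(a.length, b.length) || carry; i++) {
                var sum = (a[i] || 0) + (b[i] || 0) + carry;
                out.push(sum % BASE);
                carry = Math.floor(sum / BASE);
            }
            return out;
        }

        function limbsToString(limbs) {                // most-significant limb unpadded, the rest zero-padded
            return limbs.map(function (v, i) {
                return i === limbs.length - 1 ? String(v) : String(v).padStart(4, '0');
            }).reverse().join('');
        }

        limbsToString(addLimbs(toLimbs('9999999999'), toLimbs('1')));   // "10000000000"

    Subtraction works the same way with borrows instead of carries; multiplication is the schoolbook digit-by-digit method (or Karatsuba once the numbers get long).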

    Read the article

  • Implementing RLE in a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world; the third dimension comes in when moving down into caves and whatnot. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems a tile cannot occupy the same (x, y) space as a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however, now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However, I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many to skip? Or would the x-axis simply change altogether and have large gaps in between - for example, [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable to be able to have up to or more than 1000 in x and y.
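
    One way to read the RLE suggestion (a sketch with invented helper names, shown in JavaScript for brevity since the idea is language-neutral and the question notes the concept carries across languages): keep the map 2D, and store each (x, y) column's z-values as [runLength, tileId] pairs instead of one entry per z level. Columns that are mostly air or mostly stone then collapse to a handful of runs.

        function encodeColumn(tiles) {                  // tiles[z] = tile id at that depth, e.g. [3,3,3,3,0,0,7]
            var runs = [];
            for (var z = 0; z < tiles.length; z++) {
                if (runs.length && runs[runs.length - 1][1] === tiles[z]) {
                    runs[runs.length - 1][0]++;         // extend the current run
                } else {
                    runs.push([1, tiles[z]]);           // start a new run
                }
            }
            return runs;                                // [[4,3],[2,0],[1,7]]
        }

        function tileAt(runs, z) {                      // walk the runs until z falls inside one
            for (var i = 0, base = 0; i < runs.length; base += runs[i][0], i++) {
                if (z < base + runs[i][0]) { return runs[i][1]; }
            }
            return undefined;                           // z is outside the stored column
        }

        // map.blockData[x][y] would then hold one runs array per column:
        tileAt(encodeColumn([3, 3, 3, 3, 0, 0, 7]), 5); // -> 0

    Writes are the fiddly part: setting a tile in the middle of a run means splitting it into up to three runs, so this pays off most when columns are read far more often than they change.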

    Read the article

  • Ubuntu 12.04 - syslog showing "SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled"

    - by Tom G
    I have been seeing these random logs in syslog on our production system. There is no XFS set up; fstab only shows local partitions, all EXT3. There is nothing in crontabs either. The only filesystem-related package I have installed is 'nfs-kernel-server'. Kernel version is 3.2.0-31-generic.

        kernel: [601730.795990] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        kernel: [601730.798710] SGI XFS Quota Management subsystem
        kernel: [601730.828493] JFS: nTxBlock = 8192, nTxLock = 65536
        kernel: [601730.897024] NTFS driver 2.1.30 [Flags: R/O MODULE].
        kernel: [601730.964412] QNX4 filesystem 0.2.3 registered.
        kernel: [601731.035679] Btrfs loaded
        os-prober: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/vda1
        10freedos: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/vda1
        10qnx: debug: /dev/vda1 is not a QNX4 partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/vda1
        macosx-prober: debug: /dev/vda1 is not an HFS+ partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/vda1
        20microsoft: debug: /dev/vda1 is not a MS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/vda1
        30utility: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/vda1
        83haiku: debug: /dev/vda1 is not a BeFS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/90bsd-distro on mounted /dev/vda1
        83haikuos-prober: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/vda1
        os-prober: debug: /dev/vda2: is active swap

    Why would this randomly show up? It also spawns multiple "jfsCommit" processes.

    Read the article

  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone large (number of) refactorings? Criteria A suitable project has its change history publicly available, has compilable code at most commits and at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository. The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all. EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!

    Read the article

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1
                ...
            }
            ...
            { //subpart N
                ...
            }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages: (1) the decomposed function becomes much smaller, which can help people read it without getting lost in the details; and (2) parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope, which can help readability and modularity in some situations. However, I also find that decomposition has some disadvantages: (1) there are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place; and (2) a module's complexity grows quadratically with the number of functions we add to it (there are more possible ways for things to call each other). Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day? EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting we would have the subparts return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modelled with just a few small, simple types and a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }
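
    One partial answer to disadvantage (1), in languages that allow nested or local functions (a sketch in JavaScript; the subpart names are taken from the pseudocode above): declare the subparts inside the big function, so they cannot be called from the wrong place and the module's function count does not grow with every extracted helper.

        function do_lots_of_stuff(input) {
            // only visible inside do_lots_of_stuff, so nothing else can call them
            function subpart_1(data) { /* ... */ return data; }
            function subpart_2(data) { /* ... */ return data; }
            function assembleStuff(p1, p2) { /* ... */ return [p1, p2]; }

            var p1 = subpart_1(input);
            var p2 = subpart_2(input);
            return assembleStuff(p1, p2);
        }

    The trade-off is that the nested helpers are also invisible to unit tests, which is one reason this convention is usually paired with keeping the helpers small and pure.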

    Read the article

  • iptables -P FORWARD DROP makes port forwarding slow

    - by Isaac
    I have three computers, linked like this:

        box1 (ubuntu)       box2 router & gateway (debian)        box3 (opensuse)
        [10.0.1.1]   ----   [10.0.1.18,10.0.2.18,10.0.3.18]  ----  [10.0.3.15]
                                          |
                                     box4, www
                                     [10.0.2.1]

    Among other things I want box2 to do NAT and port forwarding, so that I can do ssh -p 2223 box2 to reach box3. For this I have the following iptables script:

        #!/bin/bash
        # flush
        iptables -F INPUT
        iptables -F FORWARD
        iptables -F OUTPUT
        iptables -t nat -F PREROUTING
        iptables -t nat -F POSTROUTING
        iptables -t nat -F OUTPUT

        # default
        default_action=DROP
        for chain in INPUT OUTPUT;do
            iptables -P $chain $default_action
        done
        iptables -P FORWARD DROP

        # allow ssh to local computer
        allowed_ssh_clients="10.0.1.1 10.0.3.15"
        for ip in $allowed_ssh_clients;do
            iptables -A OUTPUT -p tcp --sport 22 -d $ip -j ACCEPT
            iptables -A INPUT -p tcp --dport 22 -s $ip -j ACCEPT
        done

        # allow DNS
        iptables -A OUTPUT -p udp --dport 53 -m state \
            --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p udp --sport 53 -m state \
            --state ESTABLISHED,RELATED -j ACCEPT

        # allow HTTP & HTTPS
        iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --sports 80,443 -j ACCEPT

        #
        # ROUTING
        #
        # allow routing
        echo 1 >/proc/sys/net/ipv4/ip_forward

        # nat
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # http
        iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 80 -j ACCEPT

        # ssh redirect
        iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 2223 -j DNAT \
            --to-destination 10.0.3.15:22
        iptables -A FORWARD -p tcp --sport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 22 -j ACCEPT
        iptables -A FORWARD -p tcp --sport 1024:65535 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 1024:65535 -j ACCEPT

        iptables -I FORWARD -j LOG --log-prefix "iptables denied: "

    While this works, it takes about 10 seconds to get a password prompt from my ssh command. Afterwards, the connection is as responsive as could be. If I change the default policy for my FORWARD chain to "ACCEPT", the password prompt is there immediately. I have tried analysing the logs, but I cannot spot a difference in the logs for ACCEPT/DROP in my FORWARD chain. I have also tried allowing all the unprivileged ports, as box1 uses those for ssh to box2. Any hints? (If the whole setup seems strange to you - the point of the exercise is to understand iptables ;))
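
    A hedged observation rather than a definitive diagnosis: the "slow to prompt, fast afterwards" pattern is the classic signature of sshd on the target box doing a reverse DNS lookup of the client while the forwarded DNS packets are silently dropped by the DROP policy, so it waits for a timeout; with ACCEPT the lookup gets through (and with REJECT it would at least fail fast). If that is what is happening here, either set UseDNS no in sshd_config on box3, or let the internal boxes' DNS traffic be forwarded, e.g. with rules in the same style as the script above:

        # allow DNS lookups from the internal networks to be forwarded out and answered
        iptables -A FORWARD -p udp --dport 53 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -p udp --sport 53 -m state --state ESTABLISHED,RELATED -j ACCEPT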

    Read the article

  • Laptop + External Monitor: Possible to Scale Mouse-crossover Position?

    - by Brian Lacy
    I have a laptop with a 16" screen @ 1366x768 resolution, and a 24" external monitor at 1920x1200 resolution. I'm extending the Windows 7 desktop onto the other screen. The 24" is the secondary monitor, and is positioned so the bottom of that screen lines up with the bottom of the 16" screen. Currently, if I move the mouse to the top portion of the 24" screen, then move left, the mouse will stop at the edge of the 24" screen until I move it down far enough to get into the range of the 16" screen. So that's all pretty standard; but I'm wondering if there's any software out there that will allow me to "scale" the mouse-crossover position, such that if I'm at the TOP of the 24" screen and move left, the mouse will cross to the TOP of the 16" screen, whereas if I'm at the BOTTOM of the 24" screen, crossing over will position the mouse at the BOTTOM of the 16" screen. So then, if the mouse cursor is on the 24" screen at (x,y) position (10,600), and I move 20 pixels left, the mouse is now on the 16" screen at position (1356,384). Anyone know of such a solution?? Note: I also use Ubuntu, so if there's a solution to this for X, but not Windows, I'd be interested in that also.

    Read the article

  • Hyper-V + RRAS NAT + Port Forwarding + RDP, can I get it all working together?

    - by Tom Bull
    I am running a Windows 2008 R2 server with various services running natively and two virtualised servers running on Hyper-V. The hardware server, I'm going to call it REAL1, has one external NIC, to which I can assign any of the following IP addresses: 1.2.3.4, 1.2.3.5, 1.2.3.6, etc... I need to achieve the following: I would like to be able to connect to REAL1 via remote desktop (RDP / port 3389) on one IP address (say 1.2.3.4), but also to the virtualised servers (I'm going to call them VIRTUAL1 and VIRTUAL2) on the other available IP addresses (say 1.2.3.5 and 1.2.3.6). The easiest way of doing this is to connect the virtual servers directly to the external interface and assign them each their own IP address: REAL1 will have 1.2.3.4, VIRTUAL1 will have 1.2.3.5 and VIRTUAL2 will have 1.2.3.6. Unfortunately, although I don't directly manage the two virtual servers, I have responsibility for their security, and I would like to have some kind of firewall between the virtual servers and the internet. I have tried running a virtual machine firewall, but have found the performance on Hyper-V pretty terrible. The alternative I am now trying is Routing and Remote Access (RRAS): I have set up a virtual network called 'Internal', and REAL1 has a virtual network adapter connected to this virtual network. I have connected each of the virtual servers to this network too. I have assigned each server a static IP address on this virtual network (REAL1 has 10.1.1.1, VIRTUAL1 has 10.1.1.2 and VIRTUAL2 has 10.1.1.3). I have installed RRAS and set up a NAT; the external interface is the external NIC, the internal interface is the virtual NIC connected to the internal network. I have assigned all the available external IP addresses to the external NIC on REAL1. The virtual servers have been set up appropriately such that their default gateway points to 10.1.1.1, and they can both reach the outside world. Success! RRAS is routing packets. The problem I have is that when I try to port forward services from the external IP address on REAL1, it only works if there is not already a service bound to the port. Remote desktop 'greedily' binds to every available IP address on port 3389 on REAL1, so I can't selectively forward incoming traffic for 1.2.3.5:3389 to 10.1.1.2:3389. RRAS will let me set up this port forwarding, and no errors come up; it just doesn't work. So my question is: is there a better way of doing this? Or at least, is there a way of resolving the apparent conflict between RRAS and everything else on the physical server?

    Read the article

  • How do you conquer the challenge of designing for large screen real-estate?

    - by Berin Loritsch
    This question is a bit more subjective, but I'm hoping to get some new perspective. I'm so used to designing for a certain screen size (typically 1024x768) that I find that size not to be a problem. Expanding to 1280x1024 doesn't buy enough extra screen real estate to make an appreciable difference, but it does give me a little more breathing room. Basically, I just expand my "grid size" and the same basic design for the slightly smaller screen still works. However, in the last couple of projects my clients were all using 1080p (1920x1080) screens and they wanted solutions that use as much of that real estate as possible. 1920 pixels across provides just under twice the width I am used to, and the wide screen makes some of my old go-to design approaches not work as well. The problem I'm running into is that when presented with so much space, I'm confronted with some major questions. How many columns should I use? The wide format lends itself to a three-column layout with a 2:1:1 split (i.e. the content column bigger than the other two). However, if I go with three columns, what do I do with that extra column? How do I make efficient use of the screen real estate? There's a temptation to put everything on the screen at once, but too much information actually makes the application harder to use. White space is important to help make sense of complex information, but too much makes related concepts look too separate. I'm usually working with web applications that have complex data, and visualization and presentation are key to making sense of the raw data. When your user also has a large screen (at least 24"), some information is out of sight and you need to move the pointer a long distance. How do you make sure everything that's needed stays within the visual hot points? Simple sites like blogs actually do better when the width is constrained, which results in a lot of wasted real estate. I kind of wonder if having the text box and the text preview side by side would be a big benefit for the admin side on that type of screen (a 1:1 two-column split). For your answers, I know almost everything in design is "it depends". What I'm looking for is: general principles you use, and how your approach to design has changed. I'm finding that I have to retrain myself in how to work with this different format. Every bump in resolution I've worked through to date has been about 25%: 640 to 800 (a 25% increase), 800 to 1024 (a 28% increase), and 1024 to 1280 (a 25% increase). However, the jump from 1280 to 1920 is a good 50% increase in space, the equivalent of jumping from 640 straight to 1024. There was no commonly used middle size to help learn the lessons more gradually.

    Read the article

  • Crystal Reports Programmatic Image Resizing... Scale?

    - by C. Griffin
    I'm working with a Crystal Reports object in Visual Studio 2008 (C#). The report is building fine and the data is binding correctly. However, when I try to resize an IBlobFieldObject from within the source, the scale is getting skewed. Two notes about this scenario. Source image is 1024x768, my max width and height are 720x576. My math should be correct that my new image size will be 720x540 (to fit within the max width and height guidelines). The ratio is wrong when I do this though: img = Image.FromFile(path); newWidth = img.Size.Width; newHeight = img.Size.Height; if ((img.Size.Width > 720) || (img.Size.Height > 576)) { double ratio = Convert.ToDouble(img.Size.Width) / Convert.ToDouble(img.Size.Height); if (ratio > 1.25) // Adjust width to 720, height will fall within range { newWidth = 720; newHeight = Convert.ToInt32(Convert.ToDouble(img.Size.Height) * 720.0 / Convert.ToDouble(img.Size.Width)); } else // Adjust height to 576, width will fall within range { newHeight = 576; newWidth = Convert.ToInt32(Convert.ToDouble(img.Size.Width) * 576.0 / Convert.ToDouble(img.Size.Height)); } imgRpt.Section3.ReportObjects["image"].Height = newHeight; imgRpt.Section3.ReportObjects["image"].Width = newWidth; } I've stepped through the code to make sure that the values are correct from the math, and I've even saved the image file out to make sure that the aspect ratio is correct (it was). No matter what I try though, the image is squashed--almost as if the Scale values are off in the Crystal Reports designer (they're not). Thanks in advance for any help!

    Read the article

  • How to determine Scale of Line Graph based on Pixels/Height?

    - by Dexter
    Due to my terrible math abilities, I cannot figure out how to scale a graph based on its maximum and minimum values so that the whole graph (generated from an equation given by the user) fits into the graph area (400x420) without parts of it going off screen. Let's say I have this code, and it automatically draws squares and then the line graph based on these values. What is the formula (what do I multiply by) to scale it so that it fits into the small graphing area?

        vector<int> m_x;
        vector<int> m_y; // gets automatically filled by user equation or values
        int HeightInPixels = 420; // Graphing area size!!
        int WidthInPixels = 400;
        int best_max_y = GetMaxOfVector(m_y);
        int best_min_y = GetMinOfVector(m_y);
        m_row = 0;
        m_col = 0;
        y_magnitude = (HeightInPixels/(best_max_y+best_min_y)); // probably won't work
        magnitude = (WidthInPixels/(int)m_x.size());
        m_col = m_row = best_max_y; // number of vertical/horizontal lines to draw
        ////x_magnitude = (WidthInPixels/(int)m_x.size())/2; Doesn't work well
        ////y_magnitude = (HeightInPixels/(int)m_y.size())/2; Doesn't work well
        ready = true; // we have values, graph it
        Invalidate(); // uses WM_PAINT
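
    The usual linear mapping (a sketch in JavaScript, since the arithmetic is language-neutral and the C++ version is the same formula): the vertical scale factor should be the drawing height divided by the range (max minus min), not max plus min, and each value is offset by the minimum before scaling.

        // map data points onto a width x height pixel area, with y = 0 at the top
        function toPixels(xs, ys, width, height) {
            var minY = Math.min.apply(null, ys);
            var maxY = Math.max.apply(null, ys);
            var spanY = (maxY - minY) || 1;                   // avoid division by zero for flat data
            var stepX = width / Math.max(1, xs.length - 1);   // spread samples evenly across the width
            return ys.map(function (y, i) {
                return {
                    px: Math.round(i * stepX),
                    py: Math.round(height - (y - minY) * height / spanY)  // flip so larger values sit higher
                };
            });
        }

        toPixels([0, 1, 2, 3], [5, 25, 15, 45], 400, 420);
        // -> [{px:0, py:420}, {px:133, py:210}, {px:267, py:315}, {px:400, py:0}]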

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all…such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today’s SuperUser post has the answer to a puzzled reader’s dilemma. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question: SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up: I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600*10,800 pixels in size. When I right click and select “Copy Image” in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1. Why would a simple image freeze Joban’s computer up after copying it to the clipboard?

    The Answer: SuperUser contributor Mokubai has the answer for us: “Copy Image” is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB. The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out. In short: Do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

    Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Read the article

  • A question on Common Lisp

    - by kostas
    Hello people, I'm going crazy over a small problem here. I keep getting an error and I can't seem to figure out why. The code is supposed to rescale a list: if we give it a list with values (1 2 3 4) and we want to change the range to 11 through 14, the result would be (11 12 13 14). The problem is that the last function, scale-list, gives back an error saying: Debugger entered--Lisp error: (wrong-type-argument number-or-marker-p nil). Does anybody have a clue why? I use Aquamacs as an editor. Thanks in advance.

        ;;finds minimum in a list
        (defun minimum(list)
          (car (sort list #'<)))

        ;;finds maximum in a list
        (defun maximum(list)
          (car (sort list #'>)))

        ;;calculates the range of a list
        (defun range(list)
          (- (maximum list) (minimum list)))

        ;;this code scales a value from a list
        (defun scale-value(list low high n)
          (+ (/ (* (- (nth (- n 1) list) (minimum list)) (- high low)) (range list)) low))

        ;and this code is supposed to scale the whole list
        (defun scale-list(list low high n)
          (unless (= n 0)
            (cons (scale-value list low high n) (scale-list list low high (- n 1)))))

        (scale-list '(0.1 0.3 0.5 0.9) 20 30 4)

    Read the article

  • How well does Solr scale over a large number of facet values?

    - by Continuation
    I'm using Solr and I want to facet over a field "group". Since "group" is created by users, there can potentially be a huge number of values for "group". Would Solr be able to handle a use case like this, or is Solr not really appropriate for facet fields with a large number of values? I understand that I can set facet.limit to restrict the number of values returned for a facet field. Would this help in my case? Say there are 100,000 matching values for "group" in a search and I set facet.limit to 50: would that speed up the query, or would the query still be slow because Solr still needs to process and sort through all the facet values before returning the top 50? Any tips on how to tune Solr for a large number of facet values? Thanks.
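
    For reference, these are the request parameters that usually matter when a facet field has many distinct values (an illustrative sketch, not a tuning guarantee; whether facet.limit alone helps depends on the facet method, since the counting still has to cover every matching term before the top entries are picked):

        facet=true
        facet.field=group
        facet.limit=50        # return only the top 50 groups
        facet.mincount=1      # drop groups with zero matches
        facet.method=fc       # field-cache counting; 'enum' tends to suit low-cardinality fields instead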

    Read the article

  • Building a world matrix

    - by DeadMG
    When building a world projection matrix from scale, rotate, translate matrices, then the translation matrix must be the last in the process, right? Else you'll be scaling or rotating your translations. Do scale and rotate need to go in a specific order? Right now I've got std::for_each(objects.begin(), objects.end(), [&, this](D3D93DObject* ptr) { D3DXMATRIX WVP; D3DXMATRIX translation, rotationX, rotationY, rotationZ, scale; D3DXMatrixTranslation(&translation, ptr->position.x, ptr->position.y, ptr->position.z); D3DXMatrixRotationX(&rotationX, ptr->rotation.x); D3DXMatrixRotationY(&rotationY, ptr->rotation.y); D3DXMatrixRotationZ(&rotationZ, ptr->rotation.z); D3DXMatrixScaling(&translation, ptr->scale.x, ptr->scale.y, ptr->scale.z); WVP = rotationX * rotationY * rotationZ * scale * translation * ViewProjectionMatrix; });

    Read the article

  • What goes into making a web site that needs to scale?

    - by samoz
    I am planning to build an application that will get a large amount of traffic. (Please don't say I won't get traffic; this is for an internal network, so the traffic will be there. I'm just trying to avoid the 'You won't get that much traffic, don't worry about it' answers.) What sorts of things do I need to do so that it doesn't simply crash under the load of a large number of users? What are the limiting factors? Database stuff? I/O with the front end? I've never really developed a serious web app before and am looking for some help.

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >