Search Results

Search found 24734 results on 990 pages for 'floating point conversion'.


  • windows service "soft link"

    - by fred smith
    I would like a nice management tool to allow the rollout of different versions of a Windows service over time. For example, I would like to deploy my software (a Windows service) in version-numbered folders, e.g. c:\wss\v1.0, c:\wss\v1.1, etc. However, I don't want to have to reinstall the Windows service each time; instead I'd like to be able to easily point the Windows service manager to the new folder. Are there tools to get this done? NB: I have used Windows junctions before (from Sysinternals), but I am wondering if there is a nice GUI tool to do this.
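
    A rough script-based sketch of the junction-swap idea (not a GUI tool): the service name, paths and layout below are hypothetical, and it assumes the service is installed against a fixed path such as c:\wss\current that is itself a junction.

        # Sketch: repoint a "current" junction at a new version folder (assumed layout).
        # Hypothetical service name and paths; needs an elevated (administrator) prompt.
        import os
        import subprocess

        SERVICE = "MyWindowsService"                   # hypothetical service name
        WSS_ROOT = r"C:\wss"                           # version folders live here
        CURRENT = os.path.join(WSS_ROOT, "current")    # path the service is installed against

        def switch_version(version):
            target = os.path.join(WSS_ROOT, version)   # e.g. C:\wss\v1.1
            subprocess.run(["sc", "stop", SERVICE], check=False)   # may already be stopped
            if os.path.lexists(CURRENT):
                os.rmdir(CURRENT)                      # a junction is removed like an empty dir
            # mklink is a cmd built-in, so run it via cmd /c
            subprocess.run(["cmd", "/c", "mklink", "/J", CURRENT, target], check=True)
            subprocess.run(["sc", "start", SERVICE], check=True)

        switch_version("v1.1")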

    Read the article

  • What can go wrong with a GLIBC upgrade?

    - by Sevenless
    I recently installed a piece of software that my group needs for a research project starting next September. It turns out the software has a known crash bug when used with glibc 2.12.1. My boss asked if we can upgrade glibc on the server that's supposed to run it. Cue my skeptical silence.... At some point, I got it into my brain that messing with glibc was about as good an idea as messing with a hungry puma; however, I've been unable to determine the source of this belief. So, if I go ahead with this:
    1. Am I doing something flagrantly stupid (e.g. I won't fix my problem, I will brick my server, or I will initiate a zombie apocalypse)?
    2. What can go wrong?
    3. What is likely to go wrong?
    4. How do I avoid the answers to 2 and 3?

    Read the article

  • Website deployment - managing uploaded content?

    - by Legion
    I'm a programmer by trade, "server administrator" by company necessity. We're looking at dumping the old painful "update site by FTP upload" style of deployment. Having the webserver check out the latest code base from version control into a folder and having a "current" symlink point to the latest checkout (allowing for easily stepping back to an older version by changing the symlink) seems to be the way we want to go. But I have a question: what's a good practice for dealing with user-uploaded content? This stuff isn't in version control. I have a couple of ideas for dealing with this, but what is the smart, accepted practice?
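
    One commonly used arrangement, shown here only as a hedged sketch with hypothetical paths and repository URL, keeps uploads in a shared directory outside the checkouts, links it into each release, and swaps the "current" symlink atomically:

        # Sketch: export a release, link shared uploads into it, then atomically
        # swap the "current" symlink. Paths and repo URL are hypothetical examples.
        import os
        import subprocess
        import time

        REPO = "https://example.com/svn/site/trunk"    # hypothetical repository URL
        RELEASES = "/var/www/releases"
        SHARED_UPLOADS = "/var/www/shared/uploads"     # user content lives outside releases
        CURRENT = "/var/www/current"

        def deploy():
            release = os.path.join(RELEASES, time.strftime("%Y%m%d%H%M%S"))
            subprocess.run(["svn", "export", REPO, release], check=True)
            # every release sees the same uploads directory
            os.symlink(SHARED_UPLOADS, os.path.join(release, "uploads"))
            # build the new symlink next to the old one, then rename over it atomically
            tmp_link = CURRENT + ".tmp"
            if os.path.lexists(tmp_link):
                os.remove(tmp_link)
            os.symlink(release, tmp_link)
            os.replace(tmp_link, CURRENT)   # rolling back is just re-pointing CURRENT

        deploy()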

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reached a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, named in ascending date (YYYYMMDD) order, e.g.: 20100523, 20100614, 20100721, 20100722, 20100809, 20100901, 20101001, 20101003, 20101104, 20101119, 20101203, 20101218, 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision to the source tree, the end result being that the HEAD revision is the same as the last snapshot and the other revisions are as numbered. Some other requirements:
    - the HEAD revision should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but not in later ones (e.g. due to refactoring) should not appear in the HEAD revision;
    - meanwhile, there should be continuity between files that do persist between snapshots: Subversion should know that there are previous versions of these files and not treat them as brand-new files within each revision.
    Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies, and I plan to release it as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), but I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I would also like a solution in Git, so I will post that question separately so that folks with expertise in Git and Subversion can give focused answers on one or the other. The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but without enough specifics about the script: what commands to use, code to check valid scenarios if necessary - i.e. basically a working script. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
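
    As a starting point, here is a hedged sketch of such a script in Python driving the svn command line; it assumes an svn 1.7+ working copy (single top-level .svn directory) checked out from an initially empty repository, the paths are placeholders, and it should be dry-run against a scratch repository first:

        # Sketch: replay dated snapshot folders as successive Subversion revisions.
        # Assumes WORKING_COPY is a checkout of an (initially empty) repository with a
        # single top-level .svn directory (svn 1.7+), and SNAPSHOT_ROOT holds the
        # YYYYMMDD folders. Paths are placeholder examples.
        import os
        import shutil
        import subprocess

        SNAPSHOT_ROOT = "/data/snapshots"      # contains 20100523, 20100614, ...
        WORKING_COPY = "/data/wc"              # svn checkout of the target repository

        def sync_snapshot(snapshot_dir):
            # wipe everything except the .svn metadata, then copy the snapshot in,
            # so files dropped between snapshots disappear from HEAD too
            for name in os.listdir(WORKING_COPY):
                if name == ".svn":
                    continue
                path = os.path.join(WORKING_COPY, name)
                shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
            for name in os.listdir(snapshot_dir):
                src = os.path.join(snapshot_dir, name)
                dst = os.path.join(WORKING_COPY, name)
                shutil.copytree(src, dst) if os.path.isdir(src) else shutil.copy2(src, dst)

        def commit_snapshot(label):
            # schedule added ('?') and missing ('!') paths, then commit one revision
            status = subprocess.run(["svn", "status", WORKING_COPY],
                                    capture_output=True, text=True, check=True).stdout
            for line in status.splitlines():
                flag, path = line[0], line[1:].strip()
                if flag == "?":
                    subprocess.run(["svn", "add", path], check=True)
                elif flag == "!":
                    subprocess.run(["svn", "rm", "--force", path], check=True)
            subprocess.run(["svn", "commit", "-m", f"Import snapshot {label}", WORKING_COPY],
                           check=True)

        for label in sorted(os.listdir(SNAPSHOT_ROOT)):
            sync_snapshot(os.path.join(SNAPSHOT_ROOT, label))
            commit_snapshot(label)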

    Read the article

  • Cannot install SQL Server (2012) PowerPivot for SharePoint, always fails SharePoint version check

    - by ProfessionalAmateur
    Trying to do a fresh install of SharePoint 2010 (with SP1) and SQL Server 2012 PowerPivot for SharePoint. The prerequisites clearly show that SharePoint 2010 SP1 is needed, which we have installed. However, when trying to install the SQL Server portion we consistently fail the 'SharePoint version requirement for PowerPivot for SharePoint' validation rule in the SQL Server install process. Here is the process we are following:
    1. install SharePoint 2010
    2. install SharePoint 2010 SP1
    3. install SQL Server 2012 PowerPivot for SharePoint
    Here is a screen shot of the error and the log file error. We are completely stuck at this point; has anyone run into this before?

    Read the article

  • Is there a version of the Arial or Tahoma font with monospaced digits and spaces?

    - by rossmcm
    The digits in the Arial font supplied with Windows are monospaced, in that they each take up the same horizontal space, but a "monospaced" version of the space character seems to have been neglected. This means that you can't format a column of digits right-justified in (say) 12 spaces and have the right-hand edge be aligned. For example:
               1
              12
             123
            1234
           12345
         1234567
        12345678
       123456789
      1234567890
    works because the font used for code examples has spaces the same width as digits. This doesn't work, however, if the same text is displayed in Arial (I can't demonstrate this because I can't figure out how to defeat SU's reformatting at the moment!). It just so happens that with Tahoma at 8 point you can cheat, because a space is exactly half the width in pixels of a digit, but that is messy and very specific.
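
    One partial workaround, offered as a hedged sketch rather than a guaranteed fix, is to pad with U+2007 FIGURE SPACE, which is defined to match digit width in fonts that carry the glyph (whether a given Arial or Tahoma version honours it is another matter):

        # Sketch: right-align numbers by padding with U+2007 FIGURE SPACE, which is
        # specified to be as wide as a digit in fonts that include it (support varies).
        FIGURE_SPACE = "\u2007"

        def right_align(numbers, width=12):
            return [str(n).rjust(width, FIGURE_SPACE) for n in numbers]

        for line in right_align([1, 12, 123, 1234, 12345, 1234567, 1234567890]):
            print(line)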

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and according to Turing completeness this must be possible too. I learned C, Java and Python, and I use assignments a lot in every program I write, so it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on them. The example mentioned above is here:
    (define (make-withdraw balance)
      (lambda (amount)
        (if (>= balance amount)
            (begin (set! balance (- balance amount))
                   balance)
            "Insufficient funds")))
    This changes the balance with set!. To me it looks a lot like a class method changing the class member balance. As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
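
    To illustrate the idea (a sketch in Python rather than any particular pure language): instead of mutating balance, a pure function returns the new balance along with the result, and the caller threads that new state into the next call.

        # Sketch: no mutation of `balance`; each call returns (new state, result)
        # and the caller passes the new state into the next call.
        def withdraw(balance, amount):
            if balance >= amount:
                return balance - amount, balance - amount
            return balance, "Insufficient funds"

        b0 = 100
        b1, r1 = withdraw(b0, 25)     # b1 == 75, r1 == 75
        b2, r2 = withdraw(b1, 100)    # balance unchanged, r2 == "Insufficient funds"
        print(b1, r1, b2, r2)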

    Read the article

  • Pythonic use of the isinstance function?

    - by Pace
    Whenever I find myself wanting to use the isinstance() function I usually know that I'm doing something wrong and end up changing my ways. However, in this case I think I have a valid use for it. I will use shapes to illustrate my point, although I am not actually working with shapes. I am parsing XML configuration files that look like the following:
    <square>
      <width>7</width>
    </square>
    <rectangle>
      <width>5</width>
      <height>7</height>
    </rectangle>
    <circle>
      <radius>4</radius>
    </circle>
    For each element I create an instance of the Shape class and build up a list of Shape objects in a class called the ShapeContainer. Different parts of the rest of my application need to refer to the ShapeContainer to get certain shapes. Depending on what the code is doing, it might need just rectangles, or it might operate on all quadrangles, or it might operate on all shapes. I have created the following function in the ShapeContainer class (the actual function uses a list comprehension, but I have expanded it here for readability):
    def locate(self, shapeClass):
        result = []
        for shape in self.__shapes:
            if isinstance(shape, shapeClass):
                result.append(shape)
        return result
    Is this a valid use of the isinstance function? Is there another way I can do this which might be more pythonic?

    Read the article

  • Why does my system use so much cache?

    - by Dave M G
    Previously, on my desktop computer running Ubuntu 14.04, I had 4 GB of RAM, which I thought should be plenty. However, after being on for a while, my computer would seem to get slow. I have a system resource monitor app in my Gnome panel, which I assume represents the available RAM (?). It shows a dark green area as "Memory" and a light green area as "Cache". The "Cache" would slowly grow until it filled the whole graph, and then programs would get slow to load, or it would take a while to switch programs. I could alleviate the problem somewhat with this command, but eventually the cache fills up again, so it's only a band-aid:
    sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"
    So I figured I'd get more RAM, and replaced one 2 GB stick with an 8 GB stick; now I have 10 GB of RAM. And my "cache" still slowly maxes out and my computer slows as a result. Also, sometimes the computer starts out with the "cache" maxed when I first boot and log in. Not always, though; I don't know if there's a pattern that determines why it happens. Why is Ubuntu using up so much cache? Is 10 GB not enough for Ubuntu? Here's what my system monitor looks like in my Gnome panel; the middle square shows RAM usage, and the light green area is the "cache". This is my memory and swap history, which doesn't seem to include any information about "cache". I realize at this point I'm not totally clear on the difference between "cache" and "swap".
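
    For what it's worth, the numbers behind such a graph come from the kernel's own accounting in /proc/meminfo, which separates reclaimable page cache from memory that programs have actually claimed; a small sketch for inspecting them (field names as exposed by Linux):

        # Sketch: read /proc/meminfo and compare truly-used memory with reclaimable cache.
        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])   # values are reported in kB
            return info

        m = meminfo()
        cache = m.get("Cached", 0) + m.get("Buffers", 0)
        used = m["MemTotal"] - m["MemFree"] - cache
        print(f"total {m['MemTotal']} kB, used {used} kB, cache {cache} kB")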

    Read the article

  • Using a Mac to share a VPN connection

    - by Luis Novo
    I am using an iMac to share a wired network connection with other devices in my house. I am using Apple's built-in sharing functionality which works very well. I have also been using Tunnelblick as an OpenVPN client. The two technologies work great when they are not used together. The moment I connect to my VPN, sharing stops working on all other devices; the whole point of this setup was for me to share my VPN connection. Is there a way to make Internet connection sharing and OpenVPN work together on the Mac? I am using Snow Leopard.

    Read the article

  • Simple monitoring utility with up/down statuses of the host's network connectivity and services

    - by Beaming Mel-Bin
    We've looked at many monitoring tools (SolarWinds, Zabbix, Nagios) throughout the last 10 years, but they never took hold because they are overly complicated. I am willing to try them again, or something new, at this point, but with a much simpler goal:
    - ping to check up/down status of hosts
    - TCP probes to test up/down status of services
    - notifications via e-mail
    - web GUI
    - prefer an OSS solution
    Wanted to know if someone has any recommendations on this. This could be a Windows or Linux application, preferably without the requirement of agents. I don't even need SNMP support, but that may be nice for expanding once we have the above-mentioned bare minimum in place.
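
    For the bare minimum listed above, even a small scheduled script covers it; a hedged sketch (hostnames, ports and mail settings are placeholders, and the ping flags are the Linux form):

        # Sketch: ping hosts, probe TCP ports, and mail an alert on failure.
        # Hosts, ports and mail settings are placeholder examples.
        import socket
        import smtplib
        import subprocess
        from email.message import EmailMessage

        HOSTS = ["10.0.0.1", "webserver.example.com"]
        SERVICES = [("webserver.example.com", 80), ("mail.example.com", 25)]
        MAIL_TO, MAIL_FROM, SMTP_HOST = "ops@example.com", "monitor@example.com", "localhost"

        def ping(host):
            # Linux-style flags; Windows ping uses -n / -w instead
            return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                  stdout=subprocess.DEVNULL).returncode == 0

        def tcp_probe(host, port):
            try:
                with socket.create_connection((host, port), timeout=3):
                    return True
            except OSError:
                return False

        def notify(lines):
            msg = EmailMessage()
            msg["Subject"] = "Monitoring: %d check(s) failing" % len(lines)
            msg["From"], msg["To"] = MAIL_FROM, MAIL_TO
            msg.set_content("\n".join(lines))
            with smtplib.SMTP(SMTP_HOST) as s:
                s.send_message(msg)

        failures = [f"host down: {h}" for h in HOSTS if not ping(h)]
        failures += [f"service down: {h}:{p}" for h, p in SERVICES if not tcp_probe(h, p)]
        if failures:
            notify(failures)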

    Read the article

  • Virtualbox shared folder mount from fstab fails; works once bootup is complete

    - by Ben
    I've got Ubuntu 13.10 installed in VirtualBox 4.3. The host machine is Windows. I have a couple of VirtualBox shared folders being mounted by /etc/fstab. Until recently this setup worked just fine, but after upgrading from Ubuntu 13.04 and VirtualBox 4.2 (at essentially the same time) the fstab mounting stopped working. I get the following error during boot:
    An error occurred while mounting /home/benme/Documents.
    keys:Press S to skip mounting or M for manual recovery
    Pressing M for manual recovery and then trying to mount manually also fails:
    root@benme-vb:~# cd /home/benme
    root@benme-vb:/home/benme# mount Documents
    /sbin/mount.vboxsf: mounting failed with the error: No such device
    But if I instead skip mounting during boot, wait for Unity to start and then mount manually in a shell, everything works fine:
    benme-vb ~ % ls Documents
    benme-vb ~ % sudo mount Documents
    [sudo] password for benme:
    benme-vb ~ % ls Documents # actual file list omitted
    Note that when I mount manually I'm letting mount take all the options from /etc/fstab, and it works. This suggests to me that it's some sort of timing issue, where VirtualBox isn't "ready" to provide the shared-folder mounts at the point the /etc/fstab mounts are run during bootup. Here's the fstab line, just for completeness:
    Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664 0 0
    Is there something I can do about this from the Ubuntu side? Or does anyone happen to know more about this from the VirtualBox angle? I've found an old report on the VirtualBox bug tracker with identical symptoms, but in that case the user had updated VirtualBox without updating their guest additions, and resolving that fixed the problem; that isn't happening here, I've definitely got the 4.3 guest additions installed.

    Read the article

  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are getting some great proof-point stories now from our customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimony:
    - San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in savings from operational efficiencies, and $1 million in hardware cost avoidance.
    - NilsonGroup chooses the Oracle Exalytics In-Memory Machine as its solution for accessing critical data to keep its stores competitive with real-time mobile BI: it took only “3 days to get up and running” with Exalytics. (Video)
    - Nykredit, in the Danish financial sector, describes its experiences from testing the Exalytics Business Intelligence Machine: “it was up and running within 4 days”, with “more intuitive dashboards”, “up to 70x better performance” and “cheaper maintenance and lower total cost of ownership”. (Video)
    - Sodexo chose Oracle Exalytics as its business analytics platform, accelerating Essbase performance “more than 8x” for more than 2,000 Excel add-in users, “significantly changing how people in information management now deal with data”. (Video)
    - Polk, Savvis, Nykredit, and Key Energy describe testing of the Oracle Exalytics In-Memory Machine: to “reach more users than we ever have before”, “to fly through the data without impeding the analytic process”, to “drive our enterprise groups into this tool instead of having departmental solutions”, and for the “advanced visualisation this product enables”. (Video)

    Read the article

  • Monitor network connectivity in WP7 apps

    - by Daniel Moth
    Most interesting Windows Phone apps rely on some network service for their functionality. So at some point in your app you may need to know programmatically whether a network connection is available or not. For example, the Translator by Moth app relies on the Bing Translation service for translations. When a request for translation (text or voice) is made, the network call may fail. The failure reason is not evident from any of the return results, so I check the connection to see if it is present; depending on that, a different message is shown to the user. Before the translation phase is even reached, at app start-up time the Bing service is queried for its list of languages; in that case I don't want to show the user a message, and instead want to be notified when the network is available in order to send the query out again. So for those two requirements (which I imagine are common in other apps) I wrote a simple wrapper, a static MyNetwork class, over the framework APIs:
    - Call the MyNetwork.MonitorNetworkAvailability() method once so it monitors network changes
    - At any time, check the MyNetwork.IsConnected property to check for network presence
    - Subscribe to its MyNetwork.ConnectionEstablished event
    - Optionally, during debugging, use its MyNetwork.ChangeStatus method to simulate a change in network status
    As usual, there may be better ways to achieve this, but this class works perfectly for my scenarios. You can download the code here: MyNetworks.cs. Comments about this post welcome at the original blog.

    Read the article

  • Multiple Subnets on home network... would this work?

    - by rockinthesixstring
    We are looking at renting out the basement suite in our home and want to offer internet as part of the package. I, however, do not want the downstairs tenant to have access to our network (home office = private data). We currently have a pfSense firewall as our gateway, and a Windows Server 2003 box is doing our primary DHCP (192.168.0.0/24) and DNS. Here's what I'd like to do: I'd like to set up another subnet on the DHCP server (192.168.1.0/24), hook in another wireless router (as an access point only) and address it as 192.168.1.1; from there the router will hook into our primary switch and then out through the firewall. Will Server 2003 (if I add a 192.168.1.x/24 IP to the NIC) serve DHCP to the devices that connect to the new router, and will it isolate that from our network? Thanks in advance... I'm very new to multiple subnets.

    Read the article

  • Internet very slow when upgrading to Ubuntu 9.10

    - by roojoo
    I was running Ubuntu 8.x on my desktop and everything worked fine. I'm using wired internet and it worked perfectly; pages loaded pretty fast. However, when I decided to upgrade to 9.10, the upgrade failed at some point, yet I was left with what appeared to be Ubuntu 9.10. Since then the internet has been weird. When I go to a website it takes at least 10 seconds for the page to display, but if I'm already on a site and navigate to other pages on the same website it loads quickly. This never happened prior to the upgrade. I thought this might be due to the upgrade not installing correctly, so I did a fresh install of Xubuntu 9.10, but the problems are still the same. I'm writing this on a Vista machine over the wireless network and the internet is fine. Does anyone have any ideas about the issue? Thanks.

    Read the article

  • Pointing Domain to VDS Directory

    - by Jonathan Sampson
    I've got a domain name that is managed through 000Domains.com. I also have a virtual dedicated server hosted with GoDaddy.com. Within my VDS, I created a folder /mysite and placed all of my website files there. I can test this through the IP address of my VDS, but I would now like to point my domain from 000Domains over to that sub-directory hosted on GoDaddy. How do I do this? Do I need to make any specific modifications to my VDS to inform it that one of the directories will be accessible from a domain name? I have access to Simple Control Panel, if that is of any relevance.

    Read the article

  • 12.04 No Sound - ALC888 / Radeon 3200HD

    - by Ross
    Evening all. I have an MSI U230 netbook: MV40 processor, 4 GB RAM, integrated ATI Radeon 3200HD graphics and an integrated sound card with an ALC888 codec. It has HDMI out as well. I've tried a few distros and have been around Linux for a short time. I reckon I've settled on Ubuntu 12.04 (32-bit) because it does pretty much everything I want it to. I'm working with a fresh install right now; I recently re-installed when I was going in circles trying to solve my problem before. I install Ubuntu and it works, except for the sound. I have tried things like reinstalling ALSA, editing my asound.conf file, installing hda-verb and a few other things. It's at the point where I need to ask for help... Some outputs:
    ross@ross:~$ aplay -l
    ** List of PLAYBACK Hardware Devices **
    card 0: SB [HDA ATI SB], device 0: ALC888 Analog [ALC888 Analog]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    ross@ross:~$ uname -r
    3.2.0-34-generic-pae
    ross@ross:~$ lspci | grep VGA
    01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics]
    ross@ross:~$ lspci | grep Audio
    00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA)
    01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series]
    I have added options snd-hda-intel position_fix=1 to the alsa-base.conf file and unmuted everything in alsamixer. Can anyone suggest anything more?

    Read the article

  • Is it possible to do a full Android backup without first rooting the phone?

    - by Howiecamp
    I'm running stock 2.1 on my Moto Droid and am interested in rooting. My (admittedly weak at this point) understanding is that, in order to perform a backup[*], you need to root first. But in order to root, you've got to replace the 2.1 image with a rooted 2.0.1, or a stock 2.0.1 and then a rooted 2.1. So there's no CYA protection, given that you've got to take the risk of replacing the image in order to get root and only then do a backup. [*] Ideally, I'd like to back up the stock 2.1 image AND my apps. Am I understanding this correctly, or is there a way to do a backup without first replacing the image?

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a port of the standard Oracle JDK 8 ahead of the upcoming introduction of 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle - first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014. One of the things we (obviously) need to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS- and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information:
    - Cavium announcement on the ThunderX product family
    - Cavium announcement on Oracle collaboration
    As a reminder, we plan to release the Oracle JDK 8 port for 64-bit ARMv8 under the royalty-free (for general-purpose servers etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • How to implement physics and perspective effects on Android

    - by asedra_le
    I'm researching 2D games for Android in order to implement an Android game project. My project is similar to PaperToss: instead of throwing a page, my game will throw a coin. Suppose I have a coin in three-dimensional space with coordinates A(x,y,z). I throw the coin forward; after 1/100 of a second, the coin moves from A(x,y,z) to A'(x',y',z'). This gives me two problems to solve:
    1. Determine the formulas that can be used to compute the coordinates of the coin at time t. This problem is still under research; I have no idea how to solve it.
    2. Map the three-dimensional points to two dimensions and use those new (two-dimensional) coordinates to draw the coin on screen. I have found two possible solutions for this problem: orthographic projection and perspective projection.
    However, an old friend of mine said that OpenGL can help solve problems like these. Does anybody have experience with these problems? Please help me :) Thanks for reading my question.
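
    For the two sub-problems, the usual starting points are constant-acceleration motion for the flight and a simple perspective divide for the screen mapping; a hedged sketch, where the axis conventions and the focal distance d are assumptions rather than anything Android-specific:

        # Sketch: point A under gravity after time t, then projected to 2D.
        # Axes (gravity along y, camera looking down +z) and the focal distance d
        # are assumptions for illustration, not engine-specific values.
        def position(p0, v0, t, g=(0.0, -9.8, 0.0)):
            # p(t) = p0 + v0*t + 0.5*g*t^2, applied per axis
            return tuple(p + v * t + 0.5 * a * t * t for p, v, a in zip(p0, v0, g))

        def project(point, d=300.0):
            # perspective projection onto the plane z = d: x' = x*d/z, y' = y*d/z
            x, y, z = point
            return (x * d / z, y * d / z)

        a = (0.0, 1.5, 2.0)              # starting position A(x, y, z)
        v = (0.5, 3.0, 4.0)              # initial throw velocity
        a_prime = position(a, v, 0.01)   # coin position 1/100 s later
        print(a_prime, project(a_prime))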

    Read the article

  • Implementing an automatic navigation mesh generation for 2d top down map?

    - by J2V
    I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, so there is no requirement for dynamic runtime mesh generation. My world objects are pixel-based, not tile-based, and have associated data such as scale, rotation and origin. I will obviously need some vertex data to be generated from my world objects; maybe generate polygons from colour data? I could create a colour map of the objects for my whole map, but I have no idea how to begin creating navmesh polygons from it. What would an actual navigation mesh generation process look like with this kind of information available? Can anyone maybe point me to some good resources? I have looked into some 3D navmesh tools, but they seem overly complex for my situation and also have a lot of their required data available from models. Thanks a lot in advance! I have been trying to get my head around this for some time now.
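
    One workable route for a static, pixel-based map is to rasterise the obstacles into a coarse walkability grid from the colour map and then merge walkable cells into rectangles that become the navmesh polygons; a hedged sketch of that grid-and-merge step (the colour-map convention is an assumption):

        # Sketch: build a walkability grid from a colour map, then greedily merge
        # walkable cells into axis-aligned rectangles to use as navmesh polygons.
        # The convention (0 = walkable, 1 = obstacle) is an assumption for illustration.
        def merge_rectangles(grid):
            """grid[y][x] == 0 means walkable; returns (x, y, w, h) rectangles."""
            h, w = len(grid), len(grid[0])
            used = [[False] * w for _ in range(h)]
            rects = []
            for y in range(h):
                for x in range(w):
                    if grid[y][x] != 0 or used[y][x]:
                        continue
                    # grow right, then down, while every covered cell stays walkable
                    rw = 1
                    while x + rw < w and grid[y][x + rw] == 0 and not used[y][x + rw]:
                        rw += 1
                    rh = 1
                    while y + rh < h and all(grid[y + rh][x + i] == 0 and not used[y + rh][x + i]
                                             for i in range(rw)):
                        rh += 1
                    for dy in range(rh):
                        for dx in range(rw):
                            used[y + dy][x + dx] = True
                    rects.append((x, y, rw, rh))
            return rects

        # toy 4x6 map downsampled from a colour map: 1 = obstacle, 0 = open ground
        grid = [
            [0, 0, 0, 1, 0, 0],
            [0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0],
            [1, 1, 0, 0, 0, 0],
        ]
        print(merge_rectangles(grid))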

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade-school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because it seems knowledge of the low-level concepts is waning in recent times, I thought a review would be in order.

    CHARACTER:  See Spot Run
    HEX:        53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL:    83 101 101 32 83 112 111 116 32 82 117 110
    BINARY:     01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.

    CHARACTER: The character level is what is rendered by the computer. A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string “See Spot Run” is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

    HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch. Computers think in Base 16, with numeric values ranging from zero to fifteen, or 0 - F. Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10. Therefore, the number 10 in hex is equal to the number 16 in decimal.

    DECIMAL: The decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 - 10 = 20] in decimal than it is for us to calculate [1E - A = 14] in hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer’s head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0. In binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors, which at any given moment can be ON (1) or OFF (0).
    Each transistor is a bit, and the order in which the transistors fire (or do not fire) is what distinguishes one value from another in the computer’s head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time, while a 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

    DECIMAL  HEX  BINARY
    0        0    0000
    1        1    0001
    2        2    0010
    3        3    0011
    4        4    0100
    5        5    0101
    6        6    0110
    7        7    0111
    8        8    1000
    9        9    1001
    10       A    1010
    11       B    1011
    12       C    1100
    13       D    1101
    14       E    1110
    15       F    1111

    Remember that each character is a BYTE, there are 2 hex characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
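
    The breakdown above is easy to reproduce programmatically; a short sketch in Python:

        # Sketch: reproduce the character / hex / decimal / binary breakdown above.
        text = "See Spot Run"
        print("HEX:    ", " ".join(format(ord(c), "02X") for c in text))
        print("DECIMAL:", " ".join(str(ord(c)) for c in text))
        print("BINARY: ", " ".join(format(ord(c), "08b") for c in text))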

    Read the article

  • Hyper-V 2012 and P2000 SAS SAN

    - by user155950
    Hi, I am having major problems setting up a Hyper-V 2012 cluster on a P2000 SAS SAN. Running System Center VMM 2012 SP1, I am unable to see any storage with which to create my cluster. Has anyone experienced anything similar? Under fabric and storage I can't add the P2000; all I can do is use Storage Spaces in Server Manager to create a storage pool and virtual disk. This allows me to create a file share which I can add to VMM, but I still can't see any disk on which to create a cluster. I am just about at the point where I want to tear my hair out, wipe the servers and stick VMware on them, because I know that works, as I have set several systems up like this in the past. The Hyper-V servers can see the storage, and in Server Manager on my management machine it seems to know both servers can see the same disk. VMM is running on the same machine and it can't see any disk. Help..... Thanks, Mike

    Read the article

  • How do I access the "Deny" message from a Lidgren client?

    - by TJ Mott
    I'm using the Lidgren v3 networking library for a UDP client/server networking model. On the server end, I'm initializing a NetServer object with the NetIncomingMessage.ConnectionApproval message type enabled. So the client is able to connect successfully, and the first packet it sends is a login packet containing a username and password supplied by the user. The server receives that and does some black magic to authenticate, and everything works up to that point. If the login fails, the server calls NetIncomingMessage.SenderConnection.Deny("Invalid Login Credentials"). I want to know how to properly receive this deny message on the client. I'm getting the message; it shows up with a message type of NetIncomingMessage.StatusChanged. If I call ReadString on that message, I get a corrupted version of the string I passed to the Deny method on the server. The type of corruption varies (I've seen odd characters in there), but in every case it's truncated and is way shorter than the string I entered. Any ideas? The official documentation is sparse on this topic. I could use pointers from anyone who has successfully used the Lidgren library and uses the Accept or Deny methods. Also, if I don't do any authentication and just Approve() the connection every time, everything works just fine and I'm getting reliable two-way UDP traffic. (And lastly, Stack Exchange said I don't have enough reputation to use the "Lidgren" tag....???)

    Read the article
