Search Results

Search found 115 results on 5 pages for 'bias'.

Page 4/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice. The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and have the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences. Figure 2: the T rendered from above. Figure 3: the T rendered from below. This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, the bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras. Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ describe a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.

        Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
        Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                        boundsDimY / (float)renderTarget.Height) * 0.5f;

    This is the code where I set the camera position and camera look-ats:

        // Position camera
        if (downwards)
        {
            float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X, // possibly adjust by half a pixel?
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
        }
        else
        {
            float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
            Vector3 cameraPosition = new Vector3
            (
                boundsCentre.X,
                cameraHeight,
                boundsCentre.Z
            );
            camera.Position = cameraPosition;
            camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
        }

    Main question: now that you've seen all the problems and code, you can guess it. How do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)? Figure 1: the original model rendered in Blender.
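
    One way to reason about the half-pixel amount (a sketch of the arithmetic only; the symbols B, W and x_left are introduced here and are not from the post): with an orthographic view volume of extent B along an axis, rasterized to W pixels along that axis, the sample point of pixel i lies at the texel center, so the shift between the pixel grid and the view-volume edge is half a texel:

        \[
          x_i = x_{\text{left}} + \left(i + \tfrac{1}{2}\right)\frac{B}{W},
          \qquad \Delta_{\text{half-texel}} = \frac{B}{2W}
        \]

    For the snippet above that works out to boundsDimX / (2 * renderTarget.Width) along X and the Z extent of the view volume over (2 * renderTarget.Height) along Z; note, in passing, that the posted halfPixel uses boundsDimY for its Z component where boundsDimZ appears to be intended, which may simply be a transcription slip.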

    Read the article

  • Please Stop Voting Against a Candidate

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. This is simply a post to address an issue that I cannot ignore any longer. This two-party system that we have allowed to establish a foothold is killing this country.

    More than 2 Options: I was recently asked, “If you had to choose Romney or Obama, who would you pick?” I replied, “Non sequitur. The founders of this nation ensured that I never have to pick from only two candidates.” But somehow that is the way this country’s citizens think. I told someone last week that there are around 20 candidates for president and she was genuinely surprised. (There are actually 25 candidates.) She had no idea there were that many and, even though she knew there are more, she didn’t know any names beyond Romney and Obama. Well, I am going to try and educate people like her on the other options.

    Vote for a Candidate, not against another Candidate: So this post is the first in a series with a little bit of information about each candidate for president. I implore you... I beg you, please do your civic duty and conduct a little bit of investigation and research on your own to find the right candidate for you. Hey, if your candidate is Romney or Obama, that’s fine, as long as it’s an educated decision. But please... stop voting against a candidate. Start voting for a candidate.

    A List of Candidates: As I mentioned, I am going to write a little something about each candidate, and I’m going to go in alphabetical order by PARTY, then by CANDIDATE LAST NAME, so as to not show any bias. P.S. – If you want to know the candidate I selected, I am happy to tell you. But that’s not what this series is about.

      America's Party: Tom Hoefling
      American Third Position Party: Merlin Miller
      Americans Elect Party: No candidates met the requirement to enter into the online caucus.
      Constitution Party: Virgil Goode
      Democratic Party: Barack Obama
      Grassroots Party: Jim Carlson
      Green Party: Jill Stein
      Independent American Party: Will Christensen
      Justice Party: Rocky Anderson
      Libertarian Party: Gary Johnson
      Objectivist Party: Tom Stevens
      Peace and Freedom Party: Roseanne Barr
      Reform Party: Andre Barnett
      Republican Party: Mitt Romney
      Socialism and Liberation Party: Peta Lindsay
      Socialist Equality Party: Jerry White
      Socialist Party USA: Stewart Alexander
      Socialist Workers Party: James Harris
      Independent candidates: Jeff Boss, Richard Duncan, Jerry Litzel, Dean Morstad, Jill Reed, Randall Terry, Sheila Tittle, Michael Vargo

    Read the article

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10 and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here and how do I fix it? If I had known 12.10 would be this much of a hassle I would never have upgraded. Here is a sample of the output returned from "apt-get -f install". It should also be noted that it is just these 6 packages; no other packages have given me this kind of trouble. Well, I should say as of now: it was just 5, but then I got an update for Unity, and now unity-common is added to the troublemakers, which prevents me from further upgrading the actual unity package, as this package is a dependency.

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
         /var/cache/apt/archives/pcmciautils_018-8_i386.deb
         /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
         /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
         /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
         /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I would also like to note that I have cleaned the apt cache both through the terminal and manually, I have tried installing the packages manually through dpkg both from /var/cache/apt/archives/ and from my own manually downloaded .deb files, I have tried using dpkg-reconfigure, and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would explain the input/output errors. Quite frankly, I am just out of options and have grown tired of trying to Google a solution to this mess, but I still do not wish to resort to backing up settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings towards grammar, punctuation, and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem, then you are just contributing to it. Thank you.

    Read the article

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task. Or say you're just setting up a bunch of different servers, but you expect to use a base configuration on all of your servers. Would it be better practice to create a base image and clone it, or to automate the installation and configuration? I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general. My reasons for avoiding cloning are:

      - On a typical install, even if it's fresh, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated.
      - Network needs to be reconfigured for each clone. This would need to be done off-line, of course, or the settings will conflict with other machines on the network.

    On the other hand, some of the cloning advantages are quite clear as well:

      - (Initially?) less effort required than automating configuration.
      - Tools exist to quickly address (some) of the above disadvantages. (I can see right through my own bias there.)

    Read the article

  • What are the advantages and disadvantages of the various virtual machine image formats?

    - by Matt
    Xen, VirtualBox, etc. all support a range of different virtual machine image formats: vmdk, vdi, qcow & qcow2, hdd & vhd. Without any bias toward a particular product, I want to know the advantages and disadvantages of the various formats, both from a features perspective and in terms of robustness and speed. One piece of info I discovered in a forum post was this: "The major difference is that VDI uses relatively large blocks (1MB) when growing an image, and thus has less overhead for block pointers etc., but isn't ultimately space efficient in the sense that if a single byte is non-zero in such a 1MB block the entire space is used. VMDK in contrast uses 64K blocks, and thus has more management overhead and generally a bit less disk space consumption. What offsets this is that VDI is more efficient when it comes to snapshots." You might be thinking I want to know this because I want to know which format to choose? Not exactly: I'm developing some software which utilises these formats and want to support one or more of them. Simplicity, large disks and ease of development are my main drivers.
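
    Taking the block sizes quoted above at face value (1 MB allocation units for VDI versus 64 KB for VMDK; these figures come from the quoted forum post, not from a spec), the worst-case cost of scattered sparse writes is simple arithmetic: each touched block is allocated in full, so N writes that each land in an otherwise untouched block grow the image by roughly

        \[
          N \times 1\,\text{MB} \ \text{(VDI)} \quad \text{vs.} \quad N \times 64\,\text{KB} \ \text{(VMDK)},
          \qquad \frac{1\,\text{MB}}{64\,\text{KB}} = 16
        \]

    The flip side, as the quote notes, is bookkeeping: the same virtual disk size needs about sixteen times as many block pointers with 64 KB blocks.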

    Read the article

  • Time Machine doesn't back up some folders/files (that it should)

    - by Eric
    MacBook Pro 17" (Snow Leopard) -- WD 2TB external drive
    MacBook Pro 13" (Snow Leopard) -- Seagate 1TB external drive
    I find that Time Machine sometimes doesn't back up new folders (and the files in them). This occurs both when I choose "Back Up Now" from the Time Machine icon in the Menu Bar and in TM's scheduled backups. These are not excluded folders (nor are they in the TM do-not-back-up list); they're perfectly normal folders (at various locations) inside my home folder. The only way to force them to be backed up is to restart the computer (unmounting & mounting the TM external disk does not help). There seems to be a correlation with new folders (i.e., it's more likely to happen that an entire new folder is not backed up), but this may just be observer bias (because those are the folders that I go check to see if they've been backed up). It's not computer dependent (it happens on two different computers). It's not external disk dependent (it happens on two different external disks). It's not time dependent (not restarting for several days does not fix the problem). What does a restart change that these other events don't? I'm considering deleting the /.fseventsd folder (without restarting the computer) to see if that helps. I haven't tried logging out and logging in (without restarting the computer).

    Read the article

  • Why am I getting kicked from an IRC channel in one OS (xp) but not in the other OS (7)? [closed]

    - by moshe
    I got kicked from an IRC channel. I have Windows XP, and now if I try to get into this specific channel, I get inside but am immediately kicked out. I can come in again, and again get kicked out. It seems this is done automatically. Now I have also installed Windows 7 on another hard drive in the same computer. On Windows 7 I can get into this same channel and never get kicked out! It's the same computer, but a different operating system (separate hard drives). How can this be? Is the KICK command biased towards the operating system I got kicked in? Please explain to me how this is happening. PS: I forgot to mention that it doesn't matter if I change my IP or my nickname; I continue to get kicked out of this channel. Again, in Windows 7 I can get in without a problem. Another thing worth mentioning is that I got kicked out while I was using Windows XP, not Windows 7. I think it could also happen with Windows 2000 and Vista, so I don't suspect the OS itself, but why is it acting differently with a different OS?

    Read the article

  • EVENT RECAP: Oracle Day & Product Fair - Ft. Lauderdale

    - by cwarticki
    Are you attending any of the Oracle Days and other Events? They are fantastic! Keep track of the Oracle Events by following @OracleEvents on Twitter. Also, stay in the know by subscribing to one of the several Oracle Newsletters. Those will also keep you posted on upcoming in-person and webcast events. From the Oracle Events website, simply navigate to your geography and refine your options to locate what interests you. You can also perform keyword searches. Today, I had the opportunity to participate in the Oracle Day & Product Fair in Ft. Lauderdale, Florida. Thanks to those who stopped by to ask your support questions and watched me demo My Oracle Support features and best practices. (Bob Stanoch, Sales Consulting Manager, giving the 2nd keynote address on Exadata below.) It was a pleasant surprise to run into my former Oracle colleague Josh Tieso. Josh (pictured right) is Sr. Oracle DBA at United Healthcare. He used to work for Oracle Support years ago but has been at UHC for the last 6 years. Josh is a member of the ERP DBA team, working with Exalogic, Oracle ERP R12, & RAC. Along the exhibit/vendor row, I met with Marco Gangano, National Sales Manager at Mythics. It was great getting to meet Marco and I look forward to working with his company with regards to Support Best Practices. In addition, Lissette Paez (left) was representing TAM Training. TAM Training is an Oracle University, award-winning training partner. They cover training across the scope of Oracle products with 7 facilities in the U.S. Lissette and I have done a couple of these Oracle Days before. It's great to see familiar faces. A little while ago, I was down in this area to work with Citrix with an onsite session on Support Best Practices. Pablo Leon and Alberto Gonzalez (right) came to chat with me over at the Support booth. They wanted to know when I was giving my session. Unfortunately, not this time guys. I'm on booth duty only. Keep in touch. Many thanks to our sponsors: BIAS, Cloudera, Intel and TekStream Solutions. Come attend one of the many Oracle Days & other events planned for you. -Chris Warticki, Global Customer Management

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans, arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • VSTO Development - Key Improvements In VS2010 / .NET 4.0?

    - by dferraro
    Hi all, I am trying to make a case to my bosses for why we should use VS2010 for an upcoming Excel workbook VSTO application. I haven't used VSTO before but have used VBA. With 2010 just around the corner, I wanted to read about the improvements made to see if it was worth using 2010 to develop this application. So far I have read that two major improvements are ease of deployment and also debugging / COM interop improvements. I was just wondering if there was anything else I wasn't aware of, or if anyone here is actually developing in VSTO and has used both 2008 and 2010 and could help make a case / arm me with information. The main concern of my bosses is deploying the .NET 4.0 runtime on the Citrix servers here... however, it seems that with 3.5 we would have to deploy the VSTO runtime and PIAs, etc. So really, wouldn't deployments be easier with 2010, because installing just the 4.0 runtime is better than having to install the 'VSTO Runtime' as well as PIAs, etc.? Or is there something I'm missing here? Anyone here deploy a VSTO app in an enterprise who can speak to this? Also, I'm trying to fight to use C# over VB.NET for this app. Does anyone know any key reasons why (except for my bias on preference of syntax) it would be better to use C# over VB for this? Any key features lacking in VB VSTO development? I've read about the VSTO Power Tools, and one of them describes LINQ enablement of the Excel Object Model classes; however, it says 'a set of C# classes'. Does anyone know if they literally mean C#, so this would not work with VB.NET, or do they just mean the code is written in C#? Anyone ever used these power tools with VB? I am going to download and play with them now, but any help again is greatly appreciated. Thanks very much for any information.

    Read the article

  • How do I insert and query a DateTime object in SQLite DB from C# ?

    - by Soham
    Hi all, consider this snippet of code:

        string sDate = string.Format("{0:u}", this.Date);
        Conn.Open();
        Command.CommandText = "INSERT INTO TRADES VALUES(" + "\"" + this.Date + "\"" + "," + this.ATR + "," + "\"" + this.BIAS + "\"" + ")";
        Command.ExecuteNonQuery();

    Note the "this.Date" part of the command. Date is an object of type DateTime in the C# environment, and the DB doesn't store it (somewhere on a SQLite forum it was written that the ADO.NET wrapper automatically converts the DateTime type to ISO 8601 format). But when I use sDate (shown in the first line) instead of this.Date, then it is stored properly. My problem actually doesn't end here. Even if I use sDate, I have to retrieve it through a query, and that is creating the problem. Any query of this format

        SELECT * FROM <Table_Name> WHERE DATES = "YYYY-MM-DD"

    returns nothing, whereas replacing '=' with '>' or '<' returns the right results. So my point is: how do I query for Date variables from a SQLite database? And if there is a problem with the way I stored it (i.e. non-ISO-8601 compliant), then how do I make it compliant?
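
    A minimal sketch of the usual pattern, shown with Python's built-in sqlite3 module for brevity (the table and column names here are made up; with the ADO.NET provider the equivalent is to bind the value as a command parameter rather than concatenating it into the SQL string): store the timestamp as sortable ISO-8601-style text, like the string produced by string.Format("{0:u}", ...) above, and compare against literals in the same format, using a range when only the date part is known.

        import sqlite3
        from datetime import datetime

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE trades (dates TEXT, atr REAL, bias TEXT)")

        # Store the timestamp as sortable ISO 8601 text, bound as a parameter
        # instead of being concatenated into the SQL string.
        ts = datetime(2010, 5, 12, 14, 30, 0)
        conn.execute("INSERT INTO trades VALUES (?, ?, ?)",
                     (ts.strftime("%Y-%m-%d %H:%M:%S"), 1.25, "LONG"))

        # Text in this format compares correctly as plain strings, so equality
        # must match the full stored value (date AND time); use a range when
        # only the date is known.
        rows = conn.execute(
            "SELECT * FROM trades WHERE dates >= ? AND dates < ?",
            ("2010-05-12 00:00:00", "2010-05-13 00:00:00")).fetchall()
        print(rows)

    That would also explain the behaviour described above: a literal like "YYYY-MM-DD" never equals a stored value that includes a time component, while '>' and '<' comparisons still order correctly as plain string comparisons.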

    Read the article

  • How do I work out IEEE 754 64-bit Floating Point Double Precision?

    - by yousef gassar
    Hello, I have done this in 32-bit but I couldn't do it in 64-bit; please, I need help. I am stuck on this question and don't know how to work it out. This is the question:

    Below are two numbers represented in IEEE 754 64-bit floating point double precision; the bias of the signed exponent is -1023. Any particular real number 'N' represented in 64-bit form (i.e. with the following bit fields: 1-bit Sign, 11-bit Exponent, 52-bit Fraction) can be expressed in the form ±1.F₂ × 2^X by substituting the bit-field values using formula (IV.I):

        N = (-1)^S × 1.F₂ × 2^(E - 1023)   for 0 < E < 2047   ..................... (IV.I)

    where N = the number represented, S = sign bit value, E = exponent = X + 1023, and F = fraction (mantissa) are the values in the 1-, 11- and 52-bit fields respectively of the IEEE 754 64-bit FP representation. Using formula (IV.I), express the 64-bit FP representation of each number as:
      (i) a binary number of the form ±1.F₂ × 2^X
      (ii) a decimal number of the form ±0.F₁₀ × 10^Y {limit F₁₀ to 10 decimal places}

      Number 1: Sign (1 bit) = 0; Exponent (11 bits) = 1000 0001 001; Fraction (52 bits) = 1111 0111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
      Number 2: Sign (1 bit) = 1; Exponent (11 bits) = 1000 0000 000; Fraction (52 bits) = 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000

    I know I have to use the formula for each of these, but how do I work it out? Is it like this?

        N = (-1)^S × 1.F₂ × 2^(E - 1023) = 1 × 1.1111 0111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₂ × 2^(1000 0001 001₂ - 1023)?
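
    As a sanity check of the mechanics, here is a worked example with a deliberately simple bit pattern (chosen for illustration, not taken from the exercise above): S = 1, E = 10000000000₂ = 1024, F = 0100...0₂. Formula (IV.I) then gives

        \[
          N = (-1)^{S} \times 1.F_2 \times 2^{E-1023}
            = (-1)^{1} \times 1.01_2 \times 2^{1024-1023}
            = -1.25 \times 2^{1} = -2.5
        \]

    and for part (ii) the same value rewritten in the ±0.F₁₀ × 10^Y form is -0.25 × 10¹. The exercise values are handled the same way, just with their own exponent and fraction bit strings substituted in.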

    Read the article

  • Relay WCF Service

    - by Matt Ruwe
    This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration. We have a standard DMZ established that essentially has two firewalls: one that's external facing and the other that connects to the internal LAN. The following describes where each application tier is currently running:

      - Outside the firewall: Silverlight Application
      - In the DMZ: WCF Service (Business Logic & Data Access Layer)
      - Inside the LAN: Database

    I'm receiving input that the architecture is not correct. Specifically, it has been suggested that because "a web server is easily hacked" we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which will then communicate with the database. The external firewall is currently configured to only allow port 443 (https) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ. Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated. Thanks, Matt

    Read the article

  • CouchDB, HDFS, HBase or which is right for my situation?

    - by Lucas
    Hello all, this question is about data storage systems such as CouchDB, HDFS and HBase; specifically, which is right for my situation. I am looking at making a simple and customized document management system for my organization. Basically, we need the ability to store some Word documents, PDFs and other similar files. I also want to store metadata about these files (e.g., author, dates, etc.). Usage permissions would also be handy, but that can probably be built using metadata. I would also need the ability to full-text index. The ability to version, while not required, would be extremely useful. I would like the ability to simply add hardware to expand the resources of the system, and the system must support Network Attached Storage over the CIFS or NFS protocol(s). I have read about CouchDB, HDFS and HBase. My preferred programming language is C#, as all of my end users will be running Windows machines and I will want to make both web and WinForms client implementations. My question is: which solution best fits my needs? Based on my research it appears that CouchDB (utilizing CouchDB-Lounge and CouchDB-Lucene) perfectly fits my needs. However, I am worried that, since I have worked with CouchDB, I might be overlooking something useful for my needs in HDFS or HBase or something similar due to a bias. Any and all opinions are welcome, as I am looking for community input; I really do not want to make the wrong choice at the start of my project. Please ask if you need more information. I thank you all for your time, input and assistance.

    Read the article

  • Unable to change the system time zone setting on Windows Server 2008 R2

    - by Ganesh
    Hi all, I have an MFC application that tries to change the system time zone setting on Windows Server 2008 R2. I am using the SetTimeZoneInformation() API, which fails with error code 1314, i.e. “A required privilege is not held by the client.” Please refer to the sample code below:

        TIME_ZONE_INFORMATION l_TimeZoneInfo;
        DWORD l_dwRetVal = 0;
        ZeroMemory(&l_TimeZoneInfo, sizeof(TIME_ZONE_INFORMATION));
        l_TimeZoneInfo.Bias = -330;
        l_TimeZoneInfo.StandardBias = 0;
        l_TimeZoneInfo.StandardDate.wDay = 0;
        l_TimeZoneInfo.StandardDate.wDayOfWeek = 0;
        l_TimeZoneInfo.StandardDate.wHour = 0;
        l_TimeZoneInfo.StandardDate.wMilliseconds = 0;
        l_TimeZoneInfo.StandardDate.wMinute = 0;
        l_TimeZoneInfo.StandardDate.wMonth = 0;
        l_TimeZoneInfo.StandardDate.wSecond = 0;
        l_TimeZoneInfo.StandardDate.wYear = 0;
        CString l_csDaylightName = _T("India Daylight Time");
        CString l_csStdName = _T("India Standard Time");
        wcscpy(l_TimeZoneInfo.DaylightName, l_csDaylightName.GetBuffer(l_csDaylightName.GetLength()));
        wcscpy(l_TimeZoneInfo.StandardName, l_csStdName.GetBuffer(l_csStdName.GetLength()));
        ::SetLastError(0);
        if(0 == ::SetTimeZoneInformation(&l_TimeZoneInfo))
        {
            l_dwRetVal = ::GetLastError();
            CString l_csErr = _T("");
            l_csErr.Format(_T("%d"), l_dwRetVal);
        }

    The MFC application has been developed using Visual Studio 2008 and is UAC aware, i.e. the application has UAC enabled in its manifest file with the UAC execution level set to “HighestAvailable”. I have administrator privileges, and yet when I run the application it still fails to change the system time zone setting. Thanks in advance, Ganesh

    Read the article

  • Perceptron Classification and Model Training

    - by jake pinedo
    I'm having an issue with understanding how the perceptron algorithm works and implementing it.

        cLabel = 0  # class label: corresponds directly with featureVectors and tweets
        for m in range(miters):
            for point in featureVectors:
                margin = answers[cLabel] * self.dot_product(point, w)
                if margin <= 0:
                    modifier = float(lrate) * float(answers[cLabel])
                    modifiedPoint = point
                    for x in modifiedPoint:
                        if x != 0:
                            x *= modifier
                    newWeight = [modifiedPoint[i] + w[i] for i in range(len(w))]
                    w = newWeight
        self._learnedWeight = w

    This is what I've implemented so far, where I have a list of class labels in answers, a learning rate (lrate) and a list of feature vectors. I run it for the number of iterations in miters and then get the final weight at the end. However, I'm not sure what to do with this weight. I've trained the perceptron and now I have to classify a set of tweets, but I don't know how to do that. EDIT: Specifically, what I do in my classify method is I go through and create a feature vector for the data I'm given, which isn't a problem at all, and then I take the self._learnedWeight that I get from the earlier training code and compute the dot product of the vector and the weight. My weight and feature vectors include a bias in the 0th term of the list, so I'm including that. I then check to see if the dot product is less than or equal to 0: if so, I classify it as -1. Otherwise, it's 1. However, this doesn't seem to be working correctly.
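
    For what it's worth, here is a minimal sketch of the classify step exactly as the EDIT describes it (assuming the same dot_product helper, a bias input already stored at index 0 of every feature vector, and class labels of +1/-1 as in the training loop; the method name is made up):

        def classify(self, tweetFeatures):
            # tweetFeatures[0] is the bias input, matching the training vectors,
            # so no separate bias term needs to be added here.
            activation = self.dot_product(tweetFeatures, self._learnedWeight)
            # Threshold at zero: non-positive activation -> class -1, otherwise +1.
            return -1 if activation <= 0 else 1

    One thing to check in the training code above, independent of classification: the inner loop "for x in modifiedPoint" only rebinds the loop variable x, so "x *= modifier" never changes the list and the learning-rate scaling is not actually applied; a list comprehension such as [x * modifier if x != 0 else x for x in point] would apply it.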

    Read the article

  • Are there any configurable parameters to the gpsd?

    - by danatel
    I use the gpsd daemon with my application. Sometimes gpsd ceases to work for no apparent reason (clear sky). Even the gpsmon monitor shows no fix. Are there any parameters which must be set? Or is it a hardware problem? I am surprised that many satellites are visible but the "Stat" bitmap does not contain bit 7 (ephemeris data available). Should I somehow pre-configure my position to allow for correct ephemeris data? Here is my gpsmon screen (SiRF binary via 127.0.0.1:2947 on /dev/ttyS3; the dump is garbled in transcription, but the readable parts show a position of 49.89411° 16.44920° at 1379 m, Fix: 0, HDOP 0.0, 11 SVs tracked (PRNs 8 10 7 5 13 2 28 23 3 6 4) with C/N values of roughly 23 to 43, channel Stat flags of 0x003f/0x002d, and DGPS source 1 (SBAS) with 12 corrections).

    Read the article

  • How to (kindly) ask your users to upgrade from IE6?

    - by nickf
    It's no secret at all that IE6 has been a major roadblock to the advancement of the web over the last few years. I couldn't count the number of hours I've spent bashing my head against a wall trying to fix or debug IE6 issues. The way I see it, there are two types of IE6 user. a) the poor corporate schmoe whose IT department doesn't want to upgrade in case something breaks, and b) the mums and dads of the world who think the internet is the blue E on their desktop (and I don't mean that in a nasty way). There's probably a couple of people who know about all the other browsers, but still choose to run IE6. They get what they deserve, IMO. Anyway, getting to the point, I'd say that 90% of my IE6-using visitors are in the the mums and dads category - they're not stupid, they just don't know WHY they should upgrade to IE7 or Firefox or whatever. How do I educate these people without pissing them off? Is there a nice and friendly website I can direct these people to, which explains the reasons for upgrading in plain language? Any mention of "security" or "web standards" I think would just come across as scary. I've just seen http://www.whatbrowser.org which seems to fit the bill nicely. It explains in very basic terms: what a web browser is why you'd want to upgrade it how old your current browser is (subtle hint to those with a 9 year old browser) ..aaaand it's in 22 languages. It's from Google but displays no bias (it links to Firefox, Chrome, Opera, Safari, Internet Explorer displayed in a random order).

    Read the article

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1        1      2      0           1
        person2        0      0      1           0
        ...
        person650

    It contains the word counts for each person; this gives me a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, meaning that they have been used only once? Do I bias the data too much with this attempt? I tried the RapidMiner approach of loading the CSV into a database and then reading it in sequentially in batches for processing, as RapidMiner proposes, but MySQL can't store that many columns in a table. If I transpose the data and then re-transpose it on import, it also takes ages. So in general I am asking for advice on how to perform an SVD on such a corpus.
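
    One way to sidestep the memory problem (a sketch under assumptions: the CSV layout shown above, a made-up file name counts.csv, and SciPy available) is to build a sparse matrix, since almost all of the counts are zero, and compute a truncated SVD of only the top k singular triplets instead of the full decomposition:

        from scipy import sparse
        from scipy.sparse.linalg import svds

        def iter_counts(path):
            # Yield (row, col, count) triples so the dense matrix never exists in memory.
            with open(path) as f:
                f.readline()                                  # skip the header row of word names
                for i, line in enumerate(f):
                    cells = line.rstrip("\n").split(",")
                    for j, value in enumerate(cells[1:]):     # cells[0] is the person id
                        if value.strip() and int(value) != 0:
                            yield i, j, int(value)

        rows, cols, vals = zip(*iter_counts("counts.csv"))
        X = sparse.csr_matrix((vals, (rows, cols)), shape=(650, 14400), dtype=float)

        # Truncated SVD: only the k largest singular triplets are computed.
        U, s, Vt = svds(X, k=100)
        person_vectors = U * s      # 650 x 100 reduced characteristic vectors

    Dropping columns whose total count is 1 then becomes a modelling choice rather than a memory necessity, since singleton columns contribute only one nonzero entry each in sparse form.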

    Read the article

  • What are good examples of perfectly acceptable approaches to development that are NOT test driven development (TDD)?

    - by markbruns
    The TDD cycle is test, code, refactor, (repeat) and then ship. TDD implies development that is driven by testing; specifically, that means understanding requirements and then writing tests first, before developing or writing code. My natural inclination is a philosophical bias in favor of TDD; I would like to be convinced that there are other approaches that now work as well as or even better than TDD, so I have asked this question. What are examples of perfectly acceptable approaches that are NOT test driven development? I can think of plenty of approaches that are not TDD but could be a lot more trouble than they are worth... it's not a moral judgement, it's just that they cost more than they are worth... the following are simply examples of things that might be OK as learning exercises, but approaches I'd find to be NOT acceptable in serious production and NOT TDD might include:

      - Inspecting quality into your product -- Focusing efforts on developing a proficiency in testing/QA can be problematic, especially if you don't work on the requirements and development side first. Symptoms of this include bug triaging, where the developers have so many different bugs to deal with that it is necessary to employ a form of triage -- each development cycle gets worse and worse, programmers work more and more hours, sleep less and less, and struggle to keep going in a death march until they are consumed.
      - Superstition... believing in things that you don't understand -- This would involve borrowing code that you believe has been proven or tested from somewhere, e.g. legacy code, a magic code starter wizard or an open source project, and going forward hacking up a storm of modifications, sliding Facebook Connect into your user interface, inventing some new magic features on the fly (e.g. a mashup using the Twitter API, Google Maps API and maybe the Zappos API), showing off your cool new "product" to a few people and then writing up a simple "specification" and list of "test cases" and turning that over to Mechanical Turk for testing.

    Read the article

  • What's a good way to integrate FB and Twitter into my commenting system (PHP)

    - by Jason
    Hi guys, there are so many options out there for integration. At the moment I have comments that are posted on my articles, where a user types in their name and the comment. This is then sent to a moderation queue and displayed when approved. I want to achieve this:

      - Comment with Facebook login (i.e. Facebook account listed as the name, with avatar)
      - Comment with Twitter login (i.e. Twitter account name listed as the name, with avatar)
      - Push comments from my website to Twitter and to Facebook

    I could go down a few paths as far as I know:

      - Integrate with XFBML, which I don't like because I find it annoying to set up and messy.
      - Integrate the Facebook comments system, although this can't push to Twitter or allow me to moderate comments from my backend (as far as I can tell, I'd have to log in under the Facebook login for the dev account to moderate the comments).
      - Find a PHP class that does OAuth and integrate with both Facebook and Twitter at once.
      - Find a pre-built PHP class.

    Anyone have a solution that is biased towards being: a. easy to integrate, b. lightweight, c. free? Thanks for your suggestions in advance.

    Read the article

  • A good php framework in 2012

    - by Jormundir
    I've done a lot of googling around this, and practically all of the answers I find are pre-2011 and are answered in the usual "here are the 5 most popular frameworks" way. So I'd like to update this topic for 2012. I'm going to build a web application with a pretty complex back-end system driving it, and I'd like to use a framework so I don't have to reinvent the wheel. My application will be hugely user-based, so I would appreciate a built-in authentication/validation system. (When this is missing, it takes me a good 2 weeks of intense and frivolous research to try to pick the "best" one; I don't want to roll my own, as I don't think I'd do a better job than what's out there.) I've looked into and tried a few, so I'll give you what I like and don't like, but I don't want to bias answers too much.

    I don't like:
      - Frameworks that auto-generate bloated code. If they have the feature, fine, but if I have to use it, I get frustrated.
      - Backwards compatibility with PHP 4, eww. I don't need backwards compatibility at all.

    I like:
      - Getting up and running quickly (but without all the auto-generation bogus); what I mean by this is that all the essentials are there, so I don't have to come to a grinding halt to research what the best third-party plugin is to get the feature I need.
      - Thorough documentation, good tutorials. Good presentation of these materials.

    Please explain why your framework suggestion is good; don't just give the name of a framework without any justification. Thanks!

    Read the article

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, making the idea of just booting a virtual image attractive. Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool? Edit: Some more details... The system was running Windows Me, but upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind. Licenses were probably tied to the disk serial number. There were two HDs, both IDE. The boot disk is about 6GB, and the spare data disk is 12GB, but nearly empty. I have a small bias in favor of VirtualPC just because it's free and I've used it successfully in the past. But this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available, so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely on that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible. Edit 2: Well, I'm closer than I was when I asked... so thanks to all for the helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft LANtastic 8.0 (anyone else remember them?) that is still installed for networking with even older PCs that mostly don't exist any more is the culprit there. In my infinite free time, I will try to resolve the differences between a Safe Mode boot and a normal boot, and I feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black and white a question as the one-accepted-answer convention assumes.

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Not getting desired results with SSAO implementation

    - by user1294203
    After having implemented deferred rendering, I tried my luck with an SSAO implementation using this tutorial. Unfortunately, I'm not getting anything that looks like SSAO; you can see my result below. You can see there is some weird pattern forming and there is no occlusion shading where there needs to be (i.e. in between the objects and on the ground). The shaders I implemented follow.

    Vertex shader:

        #version 330 core
        uniform mat4 invProjMatrix;
        layout(location = 0) in vec3 in_Position;
        layout(location = 2) in vec2 in_TexCoord;
        noperspective out vec2 pass_TexCoord;
        smooth out vec3 viewRay;

        void main(void){
            pass_TexCoord = in_TexCoord;
            viewRay = (invProjMatrix * vec4(in_Position, 1.0)).xyz;
            gl_Position = vec4(in_Position, 1.0);
        }

    Fragment shader:

        #version 330 core
        uniform sampler2D DepthMap;
        uniform sampler2D NormalMap;
        uniform sampler2D noise;
        uniform vec2 projAB;
        uniform ivec3 noiseScale_kernelSize;
        uniform vec3 kernel[16];
        uniform float RADIUS;
        uniform mat4 projectionMatrix;
        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;
        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        mat3 CalcRMatrix(vec3 normal, vec2 texcoord){
            ivec2 noiseScale = noiseScale_kernelSize.xy;
            vec3 rvec = texture(noise, texcoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            return mat3(tangent, bitangent, normal);
        }

        void main(void){
            vec2 TexCoord = pass_TexCoord;
            vec3 Position = CalcPosition();
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);
            mat3 RotationMatrix = CalcRMatrix(Normal, TexCoord);
            int kernelSize = noiseScale_kernelSize.z;
            float occlusion = 0.0;
            for(int i = 0; i < kernelSize; i++){
                // Get sample position
                vec3 sample = RotationMatrix * kernel[i];
                sample = sample * RADIUS + Position;
                // Project and bias sample position to get its texture coordinates
                vec4 offset = projectionMatrix * vec4(sample, 1.0);
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;
                // Get sample depth
                float sample_depth = texture(DepthMap, offset.xy).r;
                float linearDepth = projAB.y / (sample_depth - projAB.x);
                if(abs(Position.z - linearDepth) < RADIUS){
                    occlusion += (linearDepth <= sample.z) ? 1.0 : 0.0;
                }
            }
            out_AO = 1.0 - (occlusion / kernelSize);
        }

    I draw a full-screen quad and pass the Depth and Normal textures. Normals are in RGBA16F with the alpha channel reserved for the AO factor in the blur pass. I store depth in a non-linear depth buffer (32F) and recover the linear depth using:

        float linearDepth = projAB.y / (depth - projAB.x);

    where projAB.x and projAB.y are derived from the glm::perspective (gluPerspective) matrix; z_n and z_f are the near and far clip distances. As described in the link I posted at the top, the method creates samples in a hemisphere with a higher distribution close to the center. It then uses random vectors from a texture to rotate the hemisphere randomly around the Z direction and finally orients it along the normal at the given pixel. Since the result is noisy, a blur pass follows the SSAO pass. Anyway, my position reconstruction doesn't seem to be wrong, since I also tried doing the same but with the position passed from a texture instead of being reconstructed. I also tried playing with the radius, noise texture size and number of samples, and with different kinds of texture formats, with no luck.
For some reason when changing the Radius, nothing changes. Does anyone have any suggestions? What could be going wrong?

    Read the article
