Search Results

Search found 28782 results on 1152 pages for 'input language'.


  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv
    In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.
    If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes
    LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid
    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *), it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.
    You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':
    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv
    Those familiar with SQL will not have to make a very big leap of faith to adjust the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:
    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv
    Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file:
    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml
    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.
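    Putting a couple of those operators together - purely as an illustrative sketch that reuses the sample file above, so adjust the path and the column values to suit your own data - you could pull a range of order quantities for tracking numbers starting with "4" straight into the datagrid:
    logparser "SELECT SalesOrderID, OrderQty, CarrierTrackingNumber, LineTotal FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE OrderQty BETWEEN 10 AND 25 AND CarrierTrackingNumber LIKE '4%'" -i:tsv -o:datagrid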
    Input and Output file formats
    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a format that is usable by whatever process or application you may be using.
    From any of these input formats:
    - IISW3C: parses IIS log files in the W3C Extended Log File Format.
    - IIS: parses IIS log files in the Microsoft IIS Log File Format.
    - BIN: parses IIS log files in the Centralized Binary Log File Format.
    - IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
    - HTTPERR: parses HTTP error log files generated by Http.sys.
    - URLSCAN: parses log files generated by the URLScan IIS filter.
    - CSV: parses comma-separated values text files.
    - TSV: parses tab-separated and space-separated values text files.
    - XML: parses XML text files.
    - W3C: parses text files in the W3C Extended Log File Format.
    - NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
    - TEXTLINE: returns lines from generic text files.
    - TEXTWORD: returns words from generic text files.
    - EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
    - FS: returns information on files and directories.
    - REG: returns information on registry values.
    - ADS: returns information on Active Directory objects.
    - NETMON: parses network capture files created by NetMon.
    - ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
    - COM: provides an interface to Custom Input Format COM Plugins.
    To any of these output formats:
    - NAT: formats output records as readable tabulated columns.
    - CSV: formats output records as comma-separated values text.
    - TSV: formats output records as tab-separated or space-separated values text.
    - XML: formats output records as XML documents.
    - W3C: formats output records in the W3C Extended Log File Format.
    - TPL: formats output records following user-defined templates.
    - IIS: formats output records in the Microsoft IIS Log File Format.
    - SQL: uploads output records to a table in a SQL database.
    - SYSLOG: sends output records to a Syslog server.
    - DATAGRID: displays output records in a graphical user interface.
    - CHART: creates image files containing charts.
    So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!
    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.
    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces, to provide integrated:
    - Address verification and cleansing
    - Individual matching
    - Organization matching
    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.
    (Excerpt from EDQ's CDS Individual Name Standardization Dictionary)
    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack:
    Address Verification and Standardization
    (EDQ's CDS Address Cleaning Process)
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process - this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
    Duplicate Prevention
    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:
    (EDQ Duplicate Prevention Architecture)
    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
    Batch Matching
    In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
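    To make the real-time duplicate prevention flow described above a little more concrete, here is a rough sketch of what the calling application does. The type and service names below are illustrative placeholders only, not the actual EDQ CDS interfaces, which are defined by the web services shipped in the pack:
    using System.Collections.Generic;
    using System.Linq;

    // All names here are hypothetical stand-ins, not the real EDQ CDS contracts.
    public class CustomerRecord
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
    }

    public class MatchResult
    {
        public string CandidateId { get; set; }
        public double Score { get; set; }
    }

    public interface IEdqCustomerDataServices
    {
        // Stands in for the EDQ Cluster Key Generation service.
        IList<string> GenerateClusterKeys(CustomerRecord record);
        // Stands in for the EDQ Matching service.
        IList<MatchResult> Match(CustomerRecord driving, IList<CustomerRecord> candidates);
    }

    public interface ICustomerStore
    {
        // Application data, pre-seeded with cluster keys using the same service.
        IList<CustomerRecord> FindByClusterKeys(IList<string> keys);
        void SaveClusterKeys(CustomerRecord record, IList<string> keys);
    }

    public static class DuplicatePrevention
    {
        // Called by the application whenever a record is added or updated.
        public static IList<MatchResult> CheckForDuplicates(
            IEdqCustomerDataServices edq, ICustomerStore store, CustomerRecord driving, double threshold)
        {
            IList<string> keys = edq.GenerateClusterKeys(driving);        // 1. stateless key generation
            store.SaveClusterKeys(driving, keys);                         // 2. keep the application data seeded
            IList<CustomerRecord> candidates = store.FindByClusterKeys(keys); // 3. candidate selection stays in the app
            return edq.Match(driving, candidates)                         // 4. EDQ scores the candidates
                      .Where(m => m.Score >= threshold)
                      .ToList();
        }
    }
    The point of the sketch is the division of labour: the application keeps control of its own data and transactions, while the stateless EDQ services only generate keys and score candidates.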
    The Future
    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching
    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • what are the problems in game development that require scientific research? [on hold]

    - by Anmar
    I have been into game development for approximately 2 years now, mostly prototype development and testing ideas. I am at a point in my career where I need to publish a research paper, and I would love to start doing research about game development; however, my lack of experience with actual game development in a commercial environment brings me to the Game Development Stack Exchange. My question is for the experienced game developers out there: what are the problems related to software engineering that you or your team have faced while developing games?
    Example problems:
    - The lack of a strong technique for "fun detection" in a game at an early stage of development
    - A strongly tailored Software Development Life Cycle for game development
    - Agile methodology as a game development methodology
    - Narrowing the goals gap between team members (editors, story designers, programmers, 3D artists, 2D artists)
    Community suggestions:
    - Indie game marketing requirements for success (by Yakyb)
    Any problem you can define, I would be more than happy to take into consideration for future research. My experience and work mostly involve process-related topics, basically SDLC (Waterfall, Spiral, Agile, RUP, etc.). Thank you for any input.

    Read the article

  • jQuery Datatable in MVC &hellip; extended.

    - by Steve Clements
    There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now, web forms are but a memory!!
    Grids / tables are my focus at the moment. I don't want to get into writing reams of CSS and HTML, but it's not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed headers and perhaps filtering are expected behaviour. What isn't always required, though, is the massive functionality like editing etc. that you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don't need it.
    That is where the jQuery DataTable plugin comes in. It doesn't have editing "out of the box" (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc. Take a look here: http://www.datatables.net
    I did in the first instance start looking at the Telerik MVC grid control - I'm a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free...nice! Their grid however is far more than I require. Note: using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing - I won't cover that here though, mainly because I don't have a clear answer on the best way to solve it!
    One nice thing about the dataTable above is how easy it is to extend (http://www.datatables.net/examples/plug-ins/plugin_api.html) and there are some nifty examples on the site already. I however have a requirement that wasn't on the site... I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable, if required, within its own space, i.e. everything above the grid doesn't scroll as well. Now a CSS master may have a great solution to this... I'm not that master and so didn't! The content above the grid can vary, so any kind of fixed positioning is out. So I wrote a little extension for the DataTable and hooked that up to the document.ready event and window.resize event.
    Initialising my dataTable(s)...
    $(document).ready(function () {
        var dTable = $(".tdata").dataTable({
            "bPaginate": false,
            "bLengthChange": false,
            "bFilter": true,
            "bSort": true,
            "bInfo": false,
            "bAutoWidth": true,
            "sScrollY": "400px"
        });
    });
    My extension to the API to give me the resizing...
    // **********************************************************************
    // jQuery dataTable API extension to resize grid and adjust column sizes
    $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
        var id = oSettings.nTable.id;
        var dt = $("#" + id);
        var top = dt.position().top;
        var winHeight = $(document).height();
        var remain = (winHeight - top) - 83;
        dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
        this.fnAdjustColumnSizing();
    }
    This is very much in debug mode, so pretty verbose at the moment - I'll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div - I got that from some Firebug action and stuck in my new height. The -83 is to give me the space at the bottom I require for a fixed footer!
    Finally I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got that in the show event of the tabs.
    $(document).ready(function () {
        var oTable;
        $("#tabs").tabs({
            "show": function (event, ui) {
                oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
                if (oTable.length > 0) {
                    oTable.fnSetHeightToBottom();
                }
            }
        });
        $(window).bind("resize", function () {
            oTable.fnSetHeightToBottom();
        });
    });
    And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it - to which I am extremely grateful.
    I've also hooked up some custom column filtering on the grid - pretty normal stuff though - you can get what you need for that from their website. I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with the column index as a param - I used data tags to simplify that one.
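    If it helps, a minimal sketch of that per-column filtering - assuming a text input sitting in each header cell and the same oTable reference as above; the selector and class name are mine, not from the original post - could sit inside the same document.ready block:
    $(".tdata thead input.columnFilter").keyup(function () {
        // fnFilter(value, columnIndex) in the DataTables 1.x API filters a single column
        oTable.fnFilter(this.value, $(this).closest("th").index());
    });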

    Read the article

  • HTTP Basic Auth Protected Services using Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    With Oracle JDeveloper 11g (11.1.1.4.0) one can now create a Web Service Data Control for services which are protected with HTTP Basic Authentication. So when you provide such a service to the Data Control Wizard, a dialog pops up prompting you to enter the authentication details. After you give the details, you can proceed with the creation of the Data Control. Once the Data Control is created, you can use the WSDC Tester to quickly test the service. In this case, since the service is protected, we need to first edit the connection to provide username details: enter the authentication details against username and password. Once done, select DataControl.dcx and, using the context menu, select 'Run'. This will bring up the Tester. On the Tester, select the Service node and using the context menu pick 'Operations'. This will bring up the methods which you can test. Now you can pick a method, provide the input parameters and hit Execute to see the results.

    Read the article

  • c++ write own xml parser vs using tinyxml

    - by AdityaGameProgrammer
    I am currently tasked with generating an XML file from an srt text file containing timestamps and corresponding text, and with producing an exe that accepts a file name as input and outputs the relevant XML file to be used as part of an automated script. Is it advisable to use TinyXML for this? Is this a very simple task that can be done with minimal programming? Is this one of those things which are very basic to C++ programmers? The reason I am asking is that I have recently made a shift into C++ programming after over 3 years of ActionScript development.
    Edit: your comments regarding this are very much appreciated - what's the easiest way to generate XML in C++?

    Read the article

  • Validating Data Using Data Annotation Attributes in ASP.NET MVC

    - by bipinjoshi
    The data entered by the end user in various form fields must be validated before it is saved in the database. Developers often use validation HTML helpers provided by ASP.NET MVC to perform the input validations. Additionally, you can also use data annotation attributes from the System.ComponentModel.DataAnnotations namespace to perform validations at the model level. Data annotation attributes are attached to the properties of the model class and enforce some validation criteria. They are capable of performing validation on the server side as well as on the client side. This article discusses the basics of using these attributes in an ASP.NET MVC application.
    http://www.bipinjoshi.net/articles/0a53f05f-b58c-47b1-a544-f032f5cfca58.aspx
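    As a minimal sketch of the idea (the class and rules below are invented for illustration, not taken from the article), a model class decorated with data annotation attributes might look like this:
    using System.ComponentModel.DataAnnotations;

    // Hypothetical view model used only to illustrate data annotation attributes.
    public class CustomerViewModel
    {
        [Required(ErrorMessage = "Name is required.")]
        [StringLength(50)]
        public string Name { get; set; }

        [Range(18, 120)]
        public int Age { get; set; }

        [RegularExpression(@"^\S+@\S+\.\S+$", ErrorMessage = "Please enter a valid email address.")]
        public string Email { get; set; }
    }
    In the controller action you would then check ModelState.IsValid before saving; the same attributes also drive the client-side validation emitted by the validation HTML helpers.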

    Read the article

  • how do you document your development process?

    - by David
    My current state is a mixture of spreadsheets, wikis, documents, and dated folders for my input/configuration and output files and bzr version control for code. I am relatively new to programming that requires this level of documentation, and I would like to find a better, more coherent approach. update (for clarity): My inputs are data used to generate configuration files with parameter values and my outputs are analyses of model predictions. I would really like to have an approach that allows me to associate particular configuration(s) with particular outputs, so that I can ask questions of my documentation such as "what causes over/under estimates?" or "what causes error 'X'"?

    Read the article

  • The countdown for ‘In Touch’ has begun!

    - by Julien Haye
    The Oracle 'In Touch' PartnerCast is just a week away from going live, so if you haven't registered yet, what are you waiting for?! Registration is quick and easy, so click here to register and ensure you stay informed with the latest from the Oracle PartnerNetwork. 'In Touch' relies on your input, so let David Callaghan, Senior Vice President EMEA Alliances and Channels, know your thoughts and comments via the player console, by emailing [email protected] or on Twitter using the hashtag #DCpickme. The cast will go live on Tuesday 29th October from 10:30am UK / 11:30am CET with studio guests Will O'Brien, VP Alliances & Channels, UK & Ireland, and Markus Reischl, Senior Director and Sales Leader EMEA Strategic Alliances, answering your questions on Oracle Storage and Business Intelligence. To find out more information about the cast, including the full line up, please visit the 'In Touch' webpage here.

    Read the article

  • can't load big files to server with php [closed]

    - by yozhik
    Hi all! I can't upload big files to the server. The problem is that $_FILES["filename"]["tmp_name"] is empty if the file is a little bigger than 2 MB. I tried to change these variables in php.ini:
    upload_max_filesize = 700M
    post_max_size = 16M
    but that doesn't work either. I also tried adding these variables to my .htaccess file, but then a 500 error appears. The error code while uploading is 1 (UPLOAD_ERR_INI_SIZE: the uploaded file exceeds the upload_max_filesize directive in php.ini). Here is my upload.php page; please tell me what I am doing wrong. Thanks!
    <?php
    if(strlen($_FILES["filename"]["name"])) {
        $folder = "uploads/";
        echo $folder;
        $error = "";
        if($_FILES["filename"]["size"] > 1024*700*1024) {
            $error .= "<b><p class=ErrorMessage>File size exceeds 5Mb</p></b><br>";
            header("Location: upload.php?error=".$error, true, 303);
        }
        if(!file_exists($folder.="hh/")) {
            if(!mkdir($folder, 0700))
                $error .= "<b><p class=ErrorMessage>Folder not created</p></b><br>";
        }
        //echo "<br>".$_FILES["filename"]["tmp_name"]."<br>";
        echo $folder.$_FILES["filename"]["name"]."<br>";
        echo $_FILES["filename"]["error"]."<br>";
        if(move_uploaded_file($_FILES["filename"]["tmp_name"], $folder.$_FILES["filename"]["name"])) {
            echo("File uploaded successfully<br>");
            echo("File information:<br>");
            echo("File name: ");
            echo($_FILES["filename"]["name"]);
            echo("<br>File size: ");
            echo($_FILES["filename"]["size"]);
            echo("<br>Saved to: ");
            echo($folder.=$_FILES["filename"]["name"]);
            echo("<br>File type: ");
            echo($_FILES["filename"]["type"]);
        } else {
            $error .= "<b><p class=ErrorMessage>Error uploading the file</p></b><br>";
        }
    }
    ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>File upload</title>
    </head>
    <body>
    <?php
    if(isset($_REQUEST["error"])) {
        echo $_REQUEST["error"];
    }
    ?>
    <h2><p><b>Form for uploading files</b></p></h2>
    <form action="upload.php" method="post" enctype="multipart/form-data">
        <input type="file" name="filename" READONLY><br>
        <input name="Upload" type="submit" value="Upload"><br>
    </form>
    </body>
    </html>

    Read the article

  • How to move a sprite automatically using a physicsHandler in Andengine?

    - by shailenTJ
    I use a DigitalOnScreenControl (a knob with a four-directional arrow control) to move the entity, and the entity is bound to a physicsHandler:
    physicsHandler.setEntity(sprite);
    sprite.registerUpdateHandler(physicsHandler);
    From the DigitalOnScreenControl, I know which direction I want my sprite to move. Inside its overridden onControlChange function, I call a function animateSprite that checks which direction I chose. Based on the direction, I animate my sprite differently.
    PROBLEM: I want to automatically move the sprite to a specific location on the scene, say at coordinates (207, 305). My sprite is at (100, 305), which means it has to move down by 107 pixels. How do I tell the physicsHandler to move the sprite down by 107 pixels? My animateSprite method will take care of animating the sprite's downward motion. Thank you for your input!

    Read the article

  • SQL SERVER – Integration Services Balanced Data Distributor – SSIS Balanced Data Distributor

    - by pinaldave
    Microsoft SSIS Balanced Data Distributor (BDD) is a new SSIS transform. This transform takes a single input and distributes the incoming rows to one or more outputs uniformly via multithreading. The transform takes one pipeline buffer's worth of rows at a time and moves it to the next output in a round-robin fashion. It's balanced and synchronous, so if one of the downstream transforms or destinations is slower than the others, the rest of the pipeline will stall; this transform therefore works best if all of the outputs have identical transforms and destinations. Download SQL Server Integration Services Balanced Data Distributor.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Can't get TRIM test to work

    - by Matthew Marcus
    So I'm attempting to enable TRIM using the walkthrough here: How to enable TRIM? But every time I attempt to run the hdparm command, I get the following when I try to run it with sda:
    reading sector 5805056: FAILED: Input/output error
    and I get this when running it with sda1:
    /dev/sda1: Device /dev/sda1 has non-zero LBA starting offset of 2048. Please use an absolute LBA with the /dev/ entry for the full device, rather than a partition name. /dev/sda1 is probably a partition of /dev/sda (?) The absolute LBA of sector 5807104 from /dev/sda1 should be 5809152 Aborting.
    I'm running Natty in a VBox on Windows 7. Someone PLEASE help. I keep getting this "consistency check" message on boot of my machine and I think it's because Ubuntu is writing to the same sectors on the VHD too much. I need to get TRIM working on this thing. Thanks.

    Read the article

  • Ubuntu Software Center 12.04 Does not install Software

    - by Lester Miller
    I have just loaded Ubuntu 12.04 on a computer. I am new to Ubuntu. I am using an automatic proxy server. When I pick a software package to install, I input my password. The progress icon displays for a few seconds and then it stops. I tried to load different programs and always get the same problem. I can go out on the network through Firefox, so I know I have a network connection. I do not see any errors or anything. Not sure what to do. I am thinking about switching over to SUSE.

    Read the article

  • When should method overloads be refactored?

    - by Ben Heley
    When should code that looks like this:
    DoThing(string foo, string bar);
    DoThing(string foo, string bar, int baz, bool qux);
    ...
    DoThing(string foo, string bar, int baz, bool qux, string more, string andMore);
    be refactored into something that can be called like so:
    var doThing = new DoThing(foo, bar);
    doThing.more = value;
    doThing.andMore = otherValue;
    doThing.Go();
    Or should it be refactored into something else entirely? In the particular case that inspired this question, it's a public interface for an XSLT templating DLL where we've had to add various flags (of various types) that can't be embedded into the string XML input.
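    For what it's worth, one common shape for that kind of refactor is a parameter object with the required values in the constructor and the optional ones as settable properties; the sketch below invents names around the question's DoThing example rather than prescribing the answer:
    // Illustrative parameter object replacing the growing DoThing overload list.
    public class DoThingOptions
    {
        public DoThingOptions(string foo, string bar)   // values every caller must supply
        {
            Foo = foo;
            Bar = bar;
            Baz = 0;            // optional values start at sensible defaults
            Qux = false;
        }

        public string Foo { get; private set; }
        public string Bar { get; private set; }
        public int Baz { get; set; }
        public bool Qux { get; set; }
        public string More { get; set; }
        public string AndMore { get; set; }
    }

    // Callers then set only what they need:
    // var options = new DoThingOptions(foo, bar) { More = value, AndMore = otherValue };
    // templater.DoThing(options);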

    Read the article

  • Adding VFACE semantic causes overlapping output semantics error

    - by user1423893
    My pixel shader input is as follows:
    struct VertexShaderOut
    {
        float4 Position : POSITION0;
        float2 TextureCoordinates : TEXCOORD0;
        float4 PositionClone : TEXCOORD1; // Final position values must be cloned to be used in PS calculations
        float3 Normal : TEXCOORD2;
        //float3x3 TBN : TEXCOORD3;
        float CullFace : VFACE; // A negative value (-1) faces backwards, while a positive value (+1) faces the camera (requires ps_3_0)
    };
    I'm using ps_3_0 and I wish to utilise the VFACE semantic for correct lighting of normals depending on the cull mode. If I add the VFACE semantic then I get the following errors:
    error X5639: dcl usage+index: position,0 has already been specified for an output register
    error X4504: overlapping output semantics
    Why would this occur? I can't see why there would be too much data.

    Read the article

  • What to send to server in real time FPS game?

    - by syloc
    What is the right way to tell the server the position of our local player? Some documents say that it is better to send the inputs whenever they are produced, and some documents say the client should send its position at a fixed interval. With the send-the-inputs approach: what should I do if the player is holding down the direction keys? It means I need to send a packet to the server every frame. Isn't that too much? And there is also the rotation of the player from the mouse input. Here is an example: http://www.gabrielgambetta.com/fpm_live.html What about the send-the-position-at-a-fixed-interval approach? It sends far fewer messages to the server, but it also reduces responsiveness. So which way is better?

    Read the article

  • How to set selinux?

    - by Enrique Videni
    I installed SELinux first, then I set SELINUX=enforcing instead of its original value, SELINUX=permissive, in /etc/selinux/config and rebooted my computer. I waited for some time, but it stalled; I rebooted again and it could not get into the system anymore, so I restored the setting. I tried to run the seinfo command in a terminal, but it output the errors below:
    ERROR: policydb version 26 does not match my version range 15-24
    ERROR: Unable to open policy /etc/selinux/ubuntu/policy/policy.26.
    ERROR: Input/output error
    It seems that there is a little difference in how to start up SELinux between CentOS and Ubuntu; can you help me configure SELinux in Ubuntu?

    Read the article

  • Using Telerik Reporting in a WPF application

    Now that Telerik Reporting provides WPF support, let's see how to use it (a video is also available on Getting Started with the WPF viewer).
    Creating the application:
    - Install RadControls for WPF 2010 Q1 SP1 (download | release notes).
    - Install the corresponding Telerik Reporting version.
    - Create a new WPF application project in Visual Studio.
    - Add references to the following Telerik RadControls for WPF assemblies: Telerik.Windows.Controls, Telerik.Windows.Controls.Input, Telerik.Windows.Controls.Navigation, Telerik.Windows.Data. NOTE: It is possible that the RadControls for WPF assemblies have a greater version than the one against which the WPF Report Viewer control was built. In this case you have to add appropriate assembly binding redirects (see Binding Redirects below).
    - Drag and drop the ReportViewer control from the toolbox onto the WPF window. If the ReportViewer is not available in the toolbox, you can add it using the instructions from the How to add the WPF ...

    Read the article

  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then when I get back to the main menu, all sprites and text disappear. These sprites don't disappear when I use only pixel and vertex shaders. The sprites are being drawn through D3D11, of course, with specified view and projection matrices as well as an input layout, vertex shader, and pixel shader. I'm trying DeviceContext->ClearState() but it does not help. Any ideas?
    void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
    {
        unsigned int offset = 0;
        auto context = mp_D3D->mp_Context;
        //set topology
        context->IASetPrimitiveTopology(m_Topology);
        //set input layout
        context->IASetInputLayout(mp_inputLayout);
        //set vertex and index buffers
        context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
        context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);
        //send constant buffers to shaders
        context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
        context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());
        if(gs != nullptr)
        {
            context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
            context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); //after this call all sprites disappear
        }
        //set shaders
        context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
        context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);
        //draw
        context->DrawIndexed(m_indexCount, 0, 0);
    }
    //sprites
    void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source, const Matrix& spriteMatrix, const float& rotation, Vector2d& position, const Vector2d& origin, const Color& color)
    {
        VertexPositionColorTexture* verticesPtr;
        D3D11_MAPPED_SUBRESOURCE mappedResource;
        unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
        unsigned int offset = 0;
        float halfWidth = (float)dest.right / 2.0f;
        float halfHeight = (float)dest.bottom / 2.0f;
        float z = 0.1f;
        int w = texture->Width();
        int h = texture->Height();
        float tu = (float)source.right / (w);
        float tv = (float)source.bottom / (h);
        float hu = (float)source.left / (w);
        float hv = (float)source.top / (h);
        Vector2d t0 = Vector2d(hu + tu, hv);
        Vector2d t1 = Vector2d(hu + tu, hv + tv);
        Vector2d t2 = Vector2d(hu, hv + tv);
        Vector2d t3 = Vector2d(hu, hv + tv);
        Vector2d t4 = Vector2d(hu, hv);
        Vector2d t5 = Vector2d(hu + tu, hv);
        float ex = (dest.right / 2) + (origin.x);
        float ey = (dest.bottom / 2) + (origin.y);
        Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
        VertexPositionColorTexture vertices[] =
        {
            { Vector3d(dest.right - ex, -ey, z), v4Color, t0 },
            { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
            { Vector3d(-ex, dest.bottom - ey, z), v4Color, t2 },
            { Vector3d(-ex, dest.bottom - ey, z), v4Color, t3 },
            { Vector3d(-ex, -ey, z), v4Color, t4 },
            { Vector3d(dest.right - ex, -ey, z), v4Color, t5 },
        };
        auto mp_context = mp_D3D->mp_Context;
        // Lock the vertex buffer so it can be written to.
        mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
        // Get a pointer to the data in the vertex buffer.
        verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
        // Copy the data into the vertex buffer.
        memcpy(verticesPtr, (void*)vertices, (sizeof(VertexPositionColorTexture) * 6));
        // Unlock the vertex buffer.
        mp_context->Unmap(mp_vertexBuffer, 0);
        //set vertex buffer
        mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
        //set texture
        mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
        //set matrix to shader
        mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
        mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
        //draw sprite
        mp_context->Draw(6, 0);
    }

    Read the article

  • Which Python Framework and CMS coming from PHP - CodeIgniter+ExpressionEngine?

    - by Joshua Fricke
    We are currently developing most of our applications in PHP using CodeIgniter (CI) and ExpressionEngine (EE) and are looking to try our hands at Python. So we are looking for a framework and, ideally, a CMS that work well together like the CI+EE combo does. We have done a bit of research, and these look like some good suggestions (though we are not limiting ourselves to these):
    Frameworks - http://wiki.python.org/moin/WebFrameworks
    - Django
    - Web2py
    CMS - http://wiki.python.org/moin/ContentManagementSystems (the ones below were picked because they are developed with a framework, my only frame of reference coming from CI+EE)
    - Merengue
    - Mezzanine
    - Django CMS
    Input would be great in helping us decide.

    Read the article

  • Can't Boot, video drivers or x-server problem?

    - by ZacharyH
    I was uninstalling some programs that I installed to try and get my iPod touch working with Ubuntu (I gave up on that) when Ubuntu just crashed. Now after I choose Ubuntu in GRUB, it gives me a screen that says "Ubuntu is running in low-graphics mode: your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself". It was working just fine before I started to uninstall those programs. I think that I might have uninstalled something necessary to the system. If I click OK on the screen, it gives me options to reconfigure, troubleshoot, exit to console, or restart X. But no matter what I choose I still can't boot into Ubuntu - I get stuck looking at the splash screen, which stalls forever. I was receiving support from one of my mates and he was doing something with the LiveCD, and now the message doesn't pop up any more; I just get stuck at a never-ending splash screen. Any help would be appreciated, thanks!

    Read the article

  • Ghost Incognito Automatically Loads Incognito Mode Based on Domain

    - by Jason Fitzpatrick
    Chrome: Ghost Incognito is a simple Chrome extension that automatically launches Incognito mode on a domain-by-domain basis. If you routinely visit the same sites using Incognito mode, Ghost Incognito allows you to flag domains. By default it turns on Incognito for all .XXX domains and, once you select some domains, for any that you specify. Thus if you flag angrybirds.com, as we did for our test run of the app, every time you visit angrybirds.com or a sub-domain thereof, such as shop.angrybirds.com, you'll be automatically directed to a new Incognito tab - no input from you necessary. Ghost Incognito is free, Chrome only. Ghost Incognito [via Addictive Tips]

    Read the article

  • No anti-aliasing with Xmonad

    - by Leon
    I'm looking into Xmonad. One problem I'm having is that most of my applications in Xmonad don't have anti-aliasing, for example gnome-terminal and evolution.
    I have this in my .Xresources:
    Xft.dpi: 96
    Xft.lcdfilter: lcddefault
    Xft.antialias: true
    Xft.autohint: true
    Xft.hinting: true
    Xft.hintstyle: hintfull
    Xft.hintstyle: slight
    Xft.rgba: rgb
    And this in my .gtkrc-2.0:
    gtk-theme-name="Ambiance"
    gtk-icon-theme-name="ubuntu-mono-dark"
    gtk-font-name="Sans 10"
    gtk-cursor-theme-name="DMZ-White"
    gtk-cursor-theme-size=0
    gtk-toolbar-style=GTK_TOOLBAR_BOTH
    gtk-toolbar-icon-size=GTK_ICON_SIZE_LARGE_TOOLBAR
    gtk-button-images=1
    gtk-menu-images=1
    gtk-enable-event-sounds=1
    gtk-enable-input-feedback-sounds=1
    gtk-xft-antialias=1
    gtk-xft-hinting=1
    gtk-xft-hintstyle="hintfull"
    gtk-xft-rgba="rgb"
    include "/home/leon/.gtkrc-2.0.mine"
    But I still have no anti-aliasing. When I launch gnome-settings-daemon I do get anti-aliasing, but I don't want to run gnome-settings-daemon. What could be the problem? I'm running Ubuntu 12.04 Desktop.

    Read the article

  • ASP.NET Web Forms Extensibility: Control Adapters

    - by Ricardo Peres
    All ASP.NET controls from version 2.0 can be associated with a control adapter. A control adapter is a class that inherits from ControlAdapter and has the chance to interact with the control(s) it is targeting so as to change some of its properties or alter its output. I talked about control adapters before and they are really a cool feature.
    The ControlAdapter class exposes virtual methods for some well known lifecycle events, OnInit, OnLoad, OnPreRender and OnUnload, that closely match their Control counterparts but are fired before them. Because the control adapter has a reference to its target Control, it can cast it to its concrete class and do something with it before its lifecycle events are actually fired. The adapter is also notified before the control is rendered (BeginRender), after its children are rendered (RenderChildren) and after it is itself rendered (Render): this way the adapter can modify the control's output.
    Control adapters may be specified for any class inheriting from Control, including abstract classes, web server controls and even pages. You can, for example, specify a control adapter for the WebControl and UserControl classes, but, curiously, not for Control itself. When specifying a control adapter for a page, it must inherit from PageAdapter instead of ControlAdapter. The adapter for a control, if specified, can be found on the protected Adapter property, and for a page, on the PageAdapter property.
    The first use of control adapters that came to my attention was for changing the output of standard ASP.NET web controls so that they were more based on CSS and less on HTML tables: it was the CSS Friendly Control Adapters project, now available at http://code.google.com/p/aspnetcontroladapters/. They are interesting because you specify them in one location and they apply anywhere a control of the target type is created. Mind you, it applies to controls declared on markup as well as controls created by code with the new operator.
    So, how do you use control adapters? The most usual way is through a browser definition file. In it, you specify a set of control adapters and their target controls, for a given browser. This browser definition file is an XML file with extension .Browser, and can either be global (%WINDIR%\Microsoft.NET\Framework64\vXXXX\Config\Browsers) or local to the web application, in which case it must be placed inside the App_Browsers folder at the root of the web site. It looks like this:
    <browsers>
        <browser refID="Default">
            <controlAdapters>
                <adapter controlType="System.Web.UI.WebControls.TextBox" adapterType="MyNamespace.TextBoxAdapter, MyAssembly" />
            </controlAdapters>
        </browser>
    </browsers>
    A browser definition file targets a specific browser, so you can have different definitions for Chrome, IE, Firefox, Opera, as well as for specific versions of each of those (like IE8, Firefox3). Alternatively, if you set the target to Default, it will apply to all. The reason to pick a specific browser and version might be, for example, in order to circumvent some limitation present in that specific version, so that on markup you don't need to be concerned with that.
    Another option is through the current Browser object of the request:
    this.Context.Request.Browser.Adapters.Add(typeof(TextBox).FullName, typeof(TextBoxAdapter).FullName);
    This must go very early on the page lifecycle, for example, on the OnPreInit event, or even on Application_Start. You have to specify the full class name for both the target control and the adapter. Of course, you have to do this for every request, because it won't be persisted.
    As an example, you may know that the classic TextBox control renders an HTML input tag if its TextMode is set to SingleLine and a textarea if set to MultiLine. Because the textarea has no notion of maximum length, unlike the input, something must be done in order to enforce this. Here's a simple suggestion:
    public class TextBoxControlAdapter : ControlAdapter
    {
        protected TextBox Target
        {
            get
            {
                return (this.Control as TextBox);
            }
        }

        protected override void OnLoad(EventArgs e)
        {
            if ((this.Target.MaxLength > 0) && (this.Target.TextMode == TextBoxMode.MultiLine))
            {
                if (this.Target.Page.ClientScript.IsClientScriptBlockRegistered("TextBox_KeyUp") == false)
                {
                    if (this.Target.Page.ClientScript.IsClientScriptBlockRegistered(this.Target.Page.GetType(), "TextBox_KeyUp") == false)
                    {
                        String script = String.Concat("function TextBox_KeyUp(sender) { if (sender.value.length > ", this.Target.MaxLength, ") { sender.value = sender.value.substr(0, ", this.Target.MaxLength, "); } }\n");

                        this.Target.Page.ClientScript.RegisterClientScriptBlock(this.Target.Page.GetType(), "TextBox_KeyUp", script, true);
                    }

                    this.Target.Attributes["onkeyup"] = "TextBox_KeyUp(this)";
                }
            }

            base.OnLoad(e);
        }
    }
    What it does is, for every TextBox control, if it is set for multi-line and has a defined maximum length, it injects some JavaScript that will filter out any content that exceeds this maximum length. This will occur for any TextBox that you may have on your site, or any class that inherits from it. You can use any of the previous options to register this adapter. Stay tuned for more ASP.NET Web Forms extensibility tips!

    Read the article
