Search Results

Search found 67143 results on 2686 pages for 'complex data types'.


  • Need help with SQL table structure transformation

    - by Arnis L.
    I need to perform an update/insert that simultaneously changes the structure of the incoming data. Think about shops that have a defined work time for each day of the week. Hopefully this explains better what I'm trying to achieve:

    worktimeOrigin table:
    columns: shop_id | day | val
    data:
    123 | "monday"    | "9:00 AM - 18:00"
    123 | "tuesday"   | "9:00 AM - 18:00"
    123 | "wednesday" | "9:00 AM - 18:00"

    shop table:
    columns: id | worktimeDestination.id

    worktimeDestination table:
    columns: id | monday | tuesday | wednesday

    My aim: I would like to insert the data from worktimeOrigin into worktimeDestination and set the appropriate worktimeDestination for each shop. Afterwards:

    shop table data: 123 | 1 (updated)
    worktimeDestination table data: 1 | "9:00 AM - 18:00" | "9:00 AM - 18:00" | "9:00 AM - 18:00" (inserted)

    Any ideas how to do that?
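    One possible approach (a minimal sketch, not tested against your schema): pivot the per-day rows into columns with conditional aggregation, insert the result into worktimeDestination, then point each shop at its new row. Column names follow the question; reusing shop_id as the worktimeDestination id and the FK column name worktimeDestination_id are assumptions.

      -- Hedged sketch: conditional aggregation turns the per-day rows into columns.
      -- Assumption: the new worktimeDestination.id simply reuses shop_id.
      INSERT INTO worktimeDestination (id, monday, tuesday, wednesday)
      SELECT o.shop_id,
             MAX(CASE WHEN o.day = 'monday'    THEN o.val END),
             MAX(CASE WHEN o.day = 'tuesday'   THEN o.val END),
             MAX(CASE WHEN o.day = 'wednesday' THEN o.val END)
      FROM worktimeOrigin AS o
      GROUP BY o.shop_id;

      -- Point each shop at its new worktime row (the FK column name is assumed).
      UPDATE shop
      SET worktimeDestination_id = id
      WHERE id IN (SELECT shop_id FROM worktimeOrigin);

    If the destination ids must come from a sequence/identity instead, generate the worktimeDestination rows first and join back on shop_id to update shop.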

    Read the article

  • how to remove a few lines from a Unicode registry file using batch commands in Windows?

    - by Cosmin
    Hi. I have a program that generates some data in the registry. I save it with "reg export HKCU\Software\ProgramName\Data data.reg" (Unicode format). I need to take it to another computer and import it there so the program on that computer can use the data. But first I have to remove some text lines from data.reg. The lines are easy to find because they contain certain strings. Right now I'm doing this manually (in WordPad) every few days, but maybe there is another way... Also, I can't install other programs on these computers (access is restricted), so I have to use batch/cmd files. What I've tried so far:
    - redirecting the export to "con", but that is display-only and not captured in a variable;
    - using "for /F ...", but that only works with ANSI files and strips blank lines.
    Can somebody please help me? Thank you.

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is loading a file into the product from an external source. Oracle SOA Suite provides a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types; again I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process to hold our flow. In Oracle JDeveloper this can be achieved by choosing the Define Service Later template (the other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our flow. You start with an empty canvas, so you first need to specify the load part of the process using the File Adapter.

    Select the File Adapter from the Component Palette under BPEL Services and drag and drop it onto the left-side Partner Links (left is input). You name the service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We choose Read File to denote that we will read the file, and keep the default Operation Name of Read. Press Next. The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hardcoded locations in this example, but you can use logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record it encounters in the file. Press Next. Now I tell the adapter how often I want to poll for the files; I have taken the defaults. Press Next.

    At this stage I have not yet described the format of the input, so I am going to invoke the Native Format Wizard, which guides me through creating the file input format. Clicking the purple cog icon starts the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types; for this example I will use Delimited, as I am loading a CSV file. Press Next. The wizard works best with a sample. I have a sample file, and the wizard asks how much of the file to use as a template; I will use the defaults. Note: if your data uses a character set other than US-ASCII, this is the point at which you specify the character set to use. Press Next. The sample contains multiple instances of a single record type (the wizard supports complex types as well), so we use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. These will be used by the wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV, the delimiter is ",", and I also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next. Up to this point you have not given the columns their names. In my case the sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of the columns in this dialog (not shown). Press Next.

    The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen is displayed showing the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. Link the Receive node with the LoadFile component by dragging the left-most connect node of the Receive node to the LoadFile component. Once the link is established, name the Receive node appropriately and, as in the previous post in this series, generate input variables for the BPEL process to hold the input records.

    You now need to add the product Web Service. The process is the same as described in the previous post in this series: you drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you use a Transform (you can use an Assign action, but a Transform action is more flexible). Drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File, associate the source of the mapping with the schema from the Receive node, and make the output the input variable of the Invoke node. We now build the transform. We first map the user and email attributes by dragging the elements from the left to the right. The reason we needed the transform is that we will be telling the AS-User service that we want to issue an update action. Remember that when we registered the service we used Read as the default; if we do not tell the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action, click on the transactionType node on the right and select Set Text to set the action; specify a transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. Deploy the BPEL process to the server and test the service by simply dropping a file, matching the pattern/name you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.

    Read the article

  • Spring bean creation via deserialization

    - by mdma
    Spring has many different ways of creating beans, but is it possible to create a bean by deserializing a resource? My application has a number of Components, and each manipulates a certain type of data. During test, the data object is instantiated directly and set directly on the component, e.g. component.setData(someDataObject). At runtime, the data is available as a serialized object and read in from the serialized stream by the component. Rather than having each component explicitly deserialize its data from the stream, it would be more consistent and flexible to have Spring deserialize the data object from a resource. Is there a DeserializerBeanFactory or something similar?

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex, systems used by customers wanting extreme database performance, app performance, and cost saving server consolidation. A SPARC SuperCluster consists or 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there's myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system ?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here's a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted resulting in node eviction.  
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Zend_Form validation problem

    - by GrumpyCanuck
    I am having problems getting validation to work for a form built using Zend_Form. The idea is this: I have two dropdown. One is a list of players. The other is a list of free agents who play the same position as the player. I am using an onChange javascript callback to run some Ajax code that replaces the free agent list dropdown with a new one at the position of the player they've selected from the player dropdown. Now, perhaps this is the wrong way, but I built the form by creating an instance of Zend_Form and then creating all these setX methods that add elements to the form. My reasoning was that I wanted to display certain elements in specific places on the page, not just output $this-form on my template. The problem appears to be when I get the form post back, the validator seems to not know about the validation rule I set up for the free agent drop down. Here's some relevant code to look at. I'm a relative ZF n00b so feel free to tell me I am not doing things the ZF way if it leaps out at you. The action in the controller: public function indexAction() { if ($this->getRequest()->isPost()) { $form = new Baseball_Form_Transactions(); if ($form->isValid($this->_request->getPost())) { $data = $this->_request->getPost(); $leagueInfo = Doctrine::getTable('League')->findOneByShortName($data['shortLeagueName'])->toArray(); // Create the request top drop an existing player $transactionInfo = array( 'league_id' => $leagueInfo['id'], 'team_id' => $data['teamId'], 'player_id' => $data['players'], 'type' => 'drop', 'target_team_id' => 0, 'transaction_date' => date('Y-m-d H:m:s') ); $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); // Now we do the request to add a player $transactionInfo['team_id'] = 0; $transactionInfo['player_id'] = $data['freeAgents']; $transactionInfo['target_team_id'] = $data['teamId']; $transactionInfo['type'] = 'add'; $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); $this->_flashMessenger->addMessage('Added transaction'); } } $options = array( 'teamId' => $this->teamId, 'position' => 'C', 'leagueShortName' => $this->league ); $this->transactionForm->setMyPlayers($options); $this->transactionForm->setFreeAgents($options); $this->transactionForm->setTeamId($options); $this->transactionForm->setShortLeagueName($options); $this->view->transactionForm = $this->transactionForm; $this->view->messages = $this->_flashMessenger->getMessages(); $transaction = new Transaction(); $this->view->transactions = $transaction->byTeam($options); } Next we have the form itself public function setMyPlayers($options) { $data = Doctrine::getTable('Team')->find($options['teamId']); $players = array(); foreach ($data->Players->toArray() as $player) { $players[$player['id']] = "{$player['position']} - {$player['first_name']} {$player['last_name']}"; } $playersSelect = new Zend_Form_Element_Select( 'players', array( 'required' => true, 'label' => 'Players', 'multiOptions' => $players, ) ); $this->addElement($playersSelect); } public function setFreeAgents($options) { $q = Doctrine_Query::create() ->select('CONCAT(p.first_name, " ", p.last_name) as full_name, p.id, p.position') ->from('Player p') ->leftJoin('p.Teams t') ->leftJoin('t.League l ON l.short_name = ?', $options['leagueShortName']) ->where('t.id IS NULL') ->andWhere('p.position = ?', $options['position']) ->orderBy('p.last_name'); $q->setHydrationMode(Doctrine_Core::HYDRATE_ARRAY); $data = $q->execute(); $freeAgents = array(); foreach 
($data as $player) { $freeAgents[$player['id']] = $player['full_name']; } $freeAgentsSelect = new Zend_Form_Element_Select( 'freeAgents', array( 'label' => 'Free Agents', 'multiOptions' => $freeAgents, 'size' => 15 ) ); $freeAgentsSelect->setRequired(true); $this->addElement($freeAgentsSelect); } public function setShortLeagueName($options) { $shortLeagueNameHidden = new Zend_Form_Element_Hidden( 'shortLeagueName', array('value' => $options['leagueShortName']) ); $this->addElement($shortLeagueNameHidden); } public function setTeamId($options) { $teamIdHidden = new Zend_Form_Element_Hidden( 'teamId', array('value' => $options['teamId']) ); $this->addElement($teamIdHidden); } There is no init or __construct() method in the form. My problem seems simple enough: reject the form contents as invalid if they have not selected someone from the free agent list. Right now, it sails through as valid. I've spent some considerable time searching online for an answer, and haven't been able to find it. Thanks in advance for any help.

    Read the article

  • Does Android support near real time push notification

    - by j pimmel
    I recently learned about the ability of iPhone apps to receive nearly instantaneous notifications. This is provided in the form of push notifications, a bespoke protocol which keeps an always-on data connection to the iPhone and sends binary packets to the app, which pops up alerts incredibly quickly - between 0.5 and 5 seconds from server-side send to phone-app response. This is sent as data - rather than SMS - in very small packets charged as part of the data plan, not as incoming messages. I would like to know whether Android has a similar facility, or whether it's possible to implement something close to this using the Android APIs. To clarify, I define "similar" as:
    - not an SMS message, but some data-driven solution;
    - as real time as possible;
    - scalable - i.e., as the server part of a mobile app, I could notify thousands of app instances in seconds.
    I appreciate the app could be pull-based, HTTP request/response style, but ideally I don't want to be polling that heavily just to check for notifications... besides which, it's like drip-draining the data plan.

    Read the article

  • How does one capture H.323 voice traffic on a VOIP network?

    - by Chris Holmes
    What I am trying to do is capture the WAV data of a phone conversation on a VOIP network using SharpPCap/PCap.Net. We are using the H.323 recommendation and my understanding is that voice data is located in the RTP packets. However, there is no way to heuristically determine if a UDP packet is a RTP packet, so we have to do more work before we can capture the data. The H.323 recommendation apparently uses a lot of traffic on specific TCP ports to negotiate the call before the WAV data is sent via RTP. However, I am having very little luck determining what data is actually sent on those TCP ports, when it is sent, what the packets look like, how to handle it, etc. If anyone has any information on how to go about this I'd really appreciate it. My Google-Fu seems to be failing me on this one.

    Read the article

  • Convert Decimal to ASCII

    - by Dan Snyder
    I'm having difficulty using reinterpret_cast. Before I show you my code I'll let you know what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically what I do is compile a binary from a test program for my processor, dump all the hex's from the binary into a vector in my C++ program, convert all of those hex's to decimal integers and store them in a DataMemory vector, which is the data memory unit for my processor. I also have instruction memory. So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. Keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff. When I'm told where the filename starts I do the following: convert the whole decimal integer element that is being pointed to to its ASCII character representation, and then search from left to right to see if the string terminates; if not, just load each character consecutively into a "filename" string. Do this until termination of the string in memory and then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

    Index  Vector    NewVector  ASCII   filename
    0      240faef0  128123792  'abc7'  'a'
    0      240faef0  128123792  'abc7'  'ab'
    0      240faef0  128123792  'abc7'  'abc'
    0      240faef0  128123792  'abc7'  'abc7'
    1      1234567a  243225     'k2s0'  'abc7k'
    1      1234567a  243225     'k2s0'  'abc7k2'
    1      1234567a  243225     'k2s0'  'abc7k2s'
    //EXIT LOOP//
    1      1234567a  243225     'k2s0'  'abc7k2s'

    Here is the code that I've written so far to get filename (I'm just applying this to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

    int i = 0;
    int step = 1000;//top->a0;
    string filename;
    char *temp = reinterpret_cast<char*>( DataMemory[1000] );//convert to char
    cout << "a0:" << top->a0 << endl;//pointer supplied
    cout << "Data:" << DataMemory[top->a0] << endl;//my vector at pointed to location
    cout << "Data(1000):" << DataMemory[1000] << endl;//the element I'm testing
    cout << "Characters:" << &temp << endl;//my temporary char array

    while(&temp[i]!=0)
    {
        filename+=temp[i];//add most recent non-terminated character to string
        i++;
        if(i==4)//when 4 characters have been added..
        {
            i=0;
            step+=1;//restart loop at the next element in DataMemory
            temp = reinterpret_cast<char*>( DataMemory[step] );
        }
    }
    cout << "Filename:" << filename << endl;

    So the issue is that when I do the conversion of my decimal element to a char array I assume that 8 hex #'s will give me 4 characters. Why isn't this the case? Here is my output:

    a0:0
    Data:0
    Data(1000):4428576
    Characters:0x7fff5fbff128
    Segmentation fault

    Read the article

  • Retrieving XML node from a path specified in an attribute value of another node

    - by Olivier PAYEN
    From this XML source : <?xml version="1.0" encoding="utf-8" ?> <ROOT> <STRUCT> <COL order="1" nodeName="FOO/BAR" colName="Foo Bar" /> <COL order="2" nodeName="FIZZ" colName="Fizz" /> </STRUCT> <DATASET> <DATA> <FIZZ>testFizz</FIZZ> <FOO> <BAR>testBar</BAR> <LIB>testLib</LIB> </FOO> </DATA> <DATA> <FIZZ>testFizz2</FIZZ> <FOO> <BAR>testBar2</BAR> <LIB>testLib2</LIB> </FOO> </DATA> </DATASET> </ROOT> I want to generate this HTML : <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <td>Foo Bar</td> <td>Fizz</td> </tr> <tr> <td>testBar</td> <td>testFizz</td> </tr> <tr> <td>testBar2</td> <td>testFizz2</td> </tr> </table> </body> </html> Here is the XSLT I currently have : <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:output method="html" indent="yes"/> <xsl:template match="/ROOT"> <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <!--Generate the table header--> <xsl:apply-templates select="STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> </xsl:apply-templates> </tr> <xsl:apply-templates select="DATASET/DATA" /> </table> </body> </html> </xsl:template> <xsl:template match="COL"> <!--Template for generating the table header--> <td> <xsl:value-of select="@colName"/> </td> </xsl:template> <xsl:template match="DATA"> <xsl:variable name="pos" select="position()" /> <tr> <xsl:for-each select="/ROOT/STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> <xsl:variable name="elementName" select="@nodeName" /> <td> <xsl:value-of select="/ROOT/DATASET/DATA[$pos]/*[name() = $elementName]" /> </td> </xsl:for-each> </tr> </xsl:template> </xsl:stylesheet> It almost works, the problem I have is to retrieve the correct DATA node from the path specified in the "nodeName" attribute value of the STRUCT block.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
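    To make the OUTPUT point concrete, here is a minimal sketch with hypothetical tables: an UPSERT-style MERGE whose OUTPUT clause returns $action alongside a source-only column (src) that is never written to the target.

      -- Hypothetical tables; names and types are illustrative only.
      CREATE TABLE dbo.TargetProducts (Id int PRIMARY KEY, Name varchar(50));
      CREATE TABLE dbo.StagedProducts (Id int, Name varchar(50), src varchar(20));

      DECLARE @audit TABLE (MergeAction nvarchar(10), Id int, NewName varchar(50), Source varchar(20));

      MERGE dbo.TargetProducts AS t
      USING dbo.StagedProducts AS s
         ON t.Id = s.Id
      WHEN MATCHED THEN
          UPDATE SET Name = s.Name
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (Id, Name) VALUES (s.Id, s.Name)
      OUTPUT $action, s.Id, inserted.Name, s.src   -- source columns are available here
      INTO @audit;

      SELECT * FROM @audit;

    The same pattern applies to the INSERT-only MERGE described above (ON 1=2), which is where capturing source columns for audit tends to pay off.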

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups have completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not backup the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull out drives out a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
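    The "interrogate the servers" ritual described at the top can also be scripted in T-SQL. A minimal sketch (the 24-hour threshold and the tempdb exclusion are assumptions; adjust to your RPO):

      -- Flag databases whose most recent full backup is missing or older than 24 hours.
      SELECT d.name,
             MAX(b.backup_finish_date) AS last_full_backup
      FROM sys.databases AS d
      LEFT JOIN msdb.dbo.backupset AS b
             ON b.database_name = d.name
            AND b.type = 'D'            -- 'D' = full database backup
      WHERE d.name <> 'tempdb'
      GROUP BY d.name
      HAVING MAX(b.backup_finish_date) IS NULL
          OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());

    Run it from a central job against each instance (or wrap it in the Powershell check mentioned earlier) so you are not relying solely on database mail to report failures.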

    Read the article

  • passing request params from jQuery to jersey service using json

    - by ccduga
    hi, im trying to POST (cross domain) some data to a jersey web service and retrieve a response (a GenericEntity object). The post successfully gets mapped to my jersey endpoint however when i pull the parameters from the request they are empty.. $ .ajax({ type: "POST", dataType: "application/json; charset=utf-8", url: jerseyNewUserUrl+'?jsoncallback=?', data:{'id':id, 'firstname':firstname,'lastname':lastname}, success: function(data, textStatus) { $('#jsonResult').html("some data: " + data.responseMsg); }, error: function ( XMLHttpRequest, textStatus, errorThrown){ alert('error'); } }); this is my jersey endpoint.. @POST @Produces( { "application/x-javascript", MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML }) @Path("/new") public JSONWithPadding addNewUser(@QueryParam("jsoncallback") @DefaultValue("empty") final String argJsonCallback, @QueryParam("id") final String argID, @QueryParam("firstname") final String argFirstName, @QueryParam("lastname") final String argLastName) is there something missing from my $.ajax call?

    Read the article

  • Excel > Microsoft Query > SQL Server > Multiple Parameters

    - by pojomx
    Hi, I'm relatively new to SQL Server and Excel/Microsoft Query. I have a query like this:

    Select ...[data]..., B1.b, B2.b, B3.b
    From TABLEA
    Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                 WHERE Date between [startdate] and [enddate] ) as B1
    Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                 WHERE Date between [startdate-1week] and [enddate] ) as B2
    Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                 WHERE Date between [startdate-2weeks] and [enddate] ) as B3
    Where Date between [startdate] and [enddate]

    It works when I enter the dates manually, but I need them to be "dynamic" (supplied from Excel). When I put a "?" (for parameters) in place of all the dates, it throws an error: "Invalid Parameter Number". How can I make this work from within Excel? I'm using SQL Server and a Microsoft Query connection.
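    One workaround worth trying (a sketch with hypothetical table and column names, since the real ones aren't shown): bind each "?" exactly once into a variable, then derive the shifted ranges with DATEADD so no parameter has to repeat. Whether DECLARE/SET pass through depends on the driver Microsoft Query is using; with the SQL Server ODBC driver this form generally works when the statement is edited as raw SQL.

      DECLARE @startdate datetime, @enddate datetime;
      SET @startdate = ?;   -- first Excel parameter
      SET @enddate   = ?;   -- second Excel parameter

      SELECT a.Region,
             B1.Total AS CurrentRange,
             B2.Total AS FromOneWeekEarlier,
             B3.Total AS FromTwoWeeksEarlier
      FROM SalesSummary AS a
      INNER JOIN (SELECT Region, SUM(Amount) AS Total FROM SalesDetail
                  WHERE SaleDate BETWEEN @startdate AND @enddate
                  GROUP BY Region) AS B1 ON B1.Region = a.Region
      INNER JOIN (SELECT Region, SUM(Amount) AS Total FROM SalesDetail
                  WHERE SaleDate BETWEEN DATEADD(WEEK, -1, @startdate) AND @enddate
                  GROUP BY Region) AS B2 ON B2.Region = a.Region
      INNER JOIN (SELECT Region, SUM(Amount) AS Total FROM SalesDetail
                  WHERE SaleDate BETWEEN DATEADD(WEEK, -2, @startdate) AND @enddate
                  GROUP BY Region) AS B3 ON B3.Region = a.Region
      WHERE a.SaleDate BETWEEN @startdate AND @enddate;

    If the driver rejects DECLARE, the fallback is simply to repeat the "?" placeholders and point each prompted parameter at the same Excel cell.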

    Read the article

  • Returned JSON is seemingly mixed up when using jQuery Ajax

    - by Niall Paterson
    I've a php script that has the following line: echo json_encode(array('success'=>'true','userid'=>$userid, 'data' => $array)); It returns the following: { "success": "true", "userid": "1", "data": [ { "id": "1", "name": "Trigger", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null }, { "id": "2", "name": "DWS", "image": "", "subtitle": "", "description": "", "range1": null, "range2": null, "range3": null } ] } But when I call a jQuery ajax as below: $.ajax({ type: 'POST', url: 'url', crossDomain: true, data: {name: name}, success: function(success, userid, data) { if (success = true) { document.write(userid); document.write(success); } } }); The userid is 'success'. The actual success one works, its true. Is this malformed data being returned? Or is it simply my code? Thanks in advance, Niall

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
    On June 20, all who follow social business and how social is changing how we do business and internal business structures gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development, Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked.

    Tweet: The winners will be those who use data to improve performance.
    Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

    Tweet: High performance hinges on integrated teams that interact with each other.
    Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that by the way).

    Tweet: Performance improvements come from making the invisible visible.
    Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

    Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
    Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

    Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
    Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

    Tweet: The sale isn't the light at the end of the tunnel, it's the start of a new marketing cycle.
    Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales "victory" ends.

    Tweet: A dead sale is one that's not shared. People must be incentivized to share.
    Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

    Tweet: Social user motivations are competition, affection, excellence and curiosity.
    Thought: Your followers will engage IF they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

    Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
    Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We've got a long way to go.

    Tweet: A complaint is a gift.
    Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

    Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
    Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he could care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

    Tweet: 86% of customers are willing to pay more for a better customer experience.
    Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • Best practice. Do I save html tags in DB or store the html entity value?

    - by Matt
    Hi guys, I was wondering which way I should do the following. I am using the TinyMCE WYSIWYG editor, which formats the user's input with the right HTML tags. Now, I need to save the data entered into the editor into a database table. Should I encode the HTML tags to their corresponding entities when inserting into the DB, so that when I get the data back from the table I don't have to encode it for XSS purposes, but I'd still have to use eval for the HTML tags to format the text? OR do I save the HTML tags into the database, then when I get the data back from the database encode the HTML tags to their entities, but then, as the tags will appear to the user, I'd have to use the eval function to actually format the data as it was entered? My thoughts are with the first option; I just wondered what you guys thought.

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging of code so much as where people see problems that this might lead to.

    The Problem
    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model
    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution
    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
    CREATE TABLE journal_line (
        entry_id bigserial primary key,
        account_id int not null references account(id),
        journal_entry_id bigint not null, -- adding references later
        amount numeric not null
    );

    I would then add "table methods" to extract debits and credits for reporting purposes:

    CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS
    $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

    CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
    LANGUAGE sql IMMUTABLE AS
    $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

    CREATE TABLE journal_entry (
        entry_id bigserial primary key, -- no natural keys :-(
        journal_id int not null references journal(id),
        date_posted date not null,
        reference text not null,
        description text not null,
        journal_lines journal_line[] not null
    );

    Then a table method and check constraints:

    CREATE OR REPLACE FUNCTION running_total(journal_entry) returns numeric
    language sql immutable as
    $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

    ALTER TABLE journal_entry ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0));

    ALTER TABLE journal_line ADD FOREIGN KEY journal_entry_id REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

    CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE PLPGSQL AS
    $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO journal_line (journal_entry_id, account_id, amount)
            SELECT NEW.id, account_id, amount FROM unnest(NEW.journal_lines);
            RETURN NEW;
        ELSE
            RAISE EXCEPTION 'Operation Not Allowed';
        END IF;
    END;
    $$;

    And finally:

    CREATE TRIGGER AFTER INSERT OR UPDATE OR DELETE ON journal_entry
    FOR EACH ROW EXECUTE_PROCEDURE je_breaout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?
    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:
    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation solely through the app logic.
    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make work, and therefore hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution....
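    As a quick sanity check of the array-based approach, here is a sketch of what inserts against the schema under My Proposed Solution would look like. The account ids, amounts, and the journal row are hypothetical, and it assumes the referenced journal and account rows exist and that the CHECK constraint is accepted as written; the ROW field order follows journal_line's columns (entry_id, account_id, journal_entry_id, amount).

      -- Balanced entry: the amounts sum to zero, so running_total() = 0 should pass.
      INSERT INTO journal_entry (journal_id, date_posted, reference, description, journal_lines)
      VALUES (1, CURRENT_DATE, 'INV-1001', 'Debit cash, credit revenue',
              ARRAY[ ROW(NULL, 10, NULL, -150.00)::journal_line,
                     ROW(NULL, 20, NULL,  150.00)::journal_line ]);

      -- Unbalanced entry: sums to -50.00, so the constraint should reject it.
      INSERT INTO journal_entry (journal_id, date_posted, reference, description, journal_lines)
      VALUES (1, CURRENT_DATE, 'INV-1002', 'Deliberately unbalanced',
              ARRAY[ ROW(NULL, 10, NULL, -150.00)::journal_line,
                     ROW(NULL, 20, NULL,  100.00)::journal_line ]);

    Whether PostgreSQL accepts the table-method notation inside the CHECK constraint is worth verifying early, since that is what the whole guarantee hangs on.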

    Read the article

  • Ajax call in a jQuery plugin not working properly

    - by Saneef
    I'm trying to create a jQuery plugin, inside I need to do an AJAX call to load an xml. jQuery.fn.imagetags = function(options) { s = jQuery.extend({ height:null, width:null, url:false, callback:null, title:null, }, options); return this.each(function(){ obj = $(this); //Initialising the placeholder $holder = $('<div />') .width(s.width).height(s.height) .addClass('jimageholder') .css({ position: 'relative', }); obj.wrap($holder); $.ajax({ type: "GET", url: s.url, dataType: "xml", success:function(data){ initGrids(obj,data,s.callback,s.title); } , error: function(data) { alert("Error loading Grid data."); }, }); function initGrids(obj, data,callback,gridtitle){ if (!data) { alert("Error loading Grid data"); } $("gridlist gridset",data).each(function(){ var gridsetname = $(this).children("setname").text(); var gridsetcolor = ""; if ($(this).children("color").text() != "") { gridsetcolor = $(this).children("color").text(); } $(this).children("grid").each(function(){ var gridcolor = gridsetcolor; //This colour will override colour set for the grid set if ($(this).children("color").text() != "") { gridcolor = $(this).children("color").text(); } //addGrid(gridsetname,id,x,y,height,width) addGrid( obj, gridsetname, $(this).children("id").text(), $(this).children("x").text(), $(this).children("y").text(), $(this).children("height").text(), $(this).children("width").text(), gridcolor, gridtitle ); }); }); } function addGrid(obj,gridsetname,id,x,y,height,width,color,gridtitle){ //To compensate for the 2px border height-=4; width-=4; $grid = $('<div />') .addClass(gridsetname) .attr("id",id) .addClass('gridtag') .imagetagsResetHighlight() .css({ "bottom":y+"px", "left":x+"px", "height":height+"px", "width":width+"px", }); if(gridtitle != null){ $grid.attr("title",gridtitle); } if(color != ""){ $grid.css({ "border-color":color, }); } obj.after($grid); } }); } The above plugin I bind with 2 DOM objects and loads two seperate XML files but the callback function is run only on the last DOM object using both loaded XML files. How can I fix this, so that the callback is applied on the corresponding DOMs. Is the above ajax call is correct? Sample usage: <script type="text/javascript"> $(function(){ $(".romeo img").imagetags({ height:500, width:497, url: "sample-data.xml", title: "Testing...", callback:function(id){ console.log(id); }, }); }); </script> <div class="padding-10 min-item background-color-black"> <div class="romeo"><img src="images/samplecontent/test_500x497.gif" alt="Image"> </div> </div> <script type="text/javascript"> $(function(){ $(".romeo2 img").imagetags({ height:500, width:497, url: "sample-data2.xml", title: "Testing...", callback:function(id){ console.log(id); }, }); }); </script> <div class="padding-10 min-item background-color-black"> <div class="romeo2"><img src="images/samplecontent/test2_500x497.gif" alt="Image"> </div> </div> Here is the sample XML data: <?xml version="1.0" encoding="utf-8"?> <gridlist> <gridset> <setname>gridset4</setname> <color>#00FF00</color> <grid> <color>#FF77FF</color> <id>grid2-324</id> <x>300</x> <y>300</y> <height>60</height> <width>60</width> </grid> </gridset> <gridset> <setname>gridset3</setname> <color>#00FF00</color> <grid> <color>#FF77FF</color> <id>grid2-212</id> <x>300</x> <y>300</y> <height>100</height> <width>100</width> </grid> <grid> <color>#FF77FF</color> <id>grid2-1212</id> <x>200</x> <y>10</y> <height>200</height> <width>10</width> </grid> </gridset> </gridlist>

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why?
    The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. It is like loading your Christmas shopping into the trunk of your car: once the trunk is full, you continue loading onto the back seat. In the same way, a storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to build the full index. Defragmentation fixes that by creating one contiguous set of pages for the index.

    What does the fragmentation affect?
    Depending on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild?
    As a rule, when you rebuild a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table:
    The DBCC DBREINDEX command rebuilds the indexes of a single table; it will not automatically rebuild the indexes of every table in a database.

    How to rebuild all indexes for all tables in a given database:

    USE [myDB]    -- enter your database name here
    DECLARE @tableName varchar(255)
    DECLARE TableCursor CURSOR FOR
        SELECT table_name FROM information_schema.tables
        WHERE table_type = 'base table'
    OPEN TableCursor
    FETCH NEXT FROM TableCursor INTO @tableName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DBCC DBREINDEX(@tableName, ' ', 90)     -- a fill factor of 90%
        FETCH NEXT FROM TableCursor INTO @tableName
    END
    CLOSE TableCursor
    DEALLOCATE TableCursor

    What does this script do?
    It reindexes all indexes in all tables of the given database, filling each index with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table is temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it still allows read-only, uncommitted data reads (i.e. SELECT).

    What is the fill factor?
    It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because the clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index; this value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must also be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation?
    Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make it easier, requiring only the table name and the index name, you can run this script:

    DECLARE
        @ID int,
        @IndexID int,
        @IndexName varchar(128)
    -- Specify the table and index names
    SELECT @IndexName = 'index_name'    -- name of the index
    SET @ID = OBJECT_ID('table_name')   -- name of the table
    SELECT @IndexID = IndID
    FROM sysindexes
    WHERE id = @ID AND name = @IndexName
    -- Show the level of fragmentation
    DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

    DBCC SHOWCONTIG scanning 'Tickets' table...
    Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
    TABLE level scan performed.
    - Pages Scanned................................: 915
    - Extents Scanned..............................: 119
    - Extent Switches..............................: 281
    - Avg. Pages per Extent........................: 7.7
    - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
    - Logical Scan Fragmentation ..................: 16.28%
    - Extent Scan Fragmentation ...................: 99.16%
    - Avg. Bytes Free per Page.....................: 2457.0
    - Avg. Page Density (full).....................: 69.64%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here?
    The Scan Density: ideally it should be 100%. It drops over time as fragmentation occurs. When it falls below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the rebuild script:

    DBCC SHOWCONTIG scanning 'Tickets' table...
    Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
    TABLE level scan performed.
    - Pages Scanned................................: 692
    - Extents Scanned..............................: 87
    - Extent Switches..............................: 86
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
    - Logical Scan Fragmentation ..................: 0.00%
    - Extent Scan Fragmentation ...................: 22.99%
    - Avg. Bytes Free per Page.....................: 639.8
    - Avg. Page Density (full).....................: 92.10%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different?
    The Scan Density has increased from 40.78% to 100%: there is no fragmentation left on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
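    DBCC DBREINDEX and DBCC SHOWCONTIG are deprecated from SQL Server 2005 onward. A minimal sketch of the newer equivalents; the table name and fill factor below are illustrative only:

    -- Rebuild every index on one table with a 90% fill factor.
    ALTER INDEX ALL ON dbo.Tickets REBUILD WITH (FILLFACTOR = 90);

    -- Fragmentation report without DBCC SHOWCONTIG.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;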

    Read the article

  • jquery ajax request is Forbidden in FF 3.6.2 and IE. How to fix (any workaround)?

    - by 1gn1ter
    <script type="text/javascript">
        $(function () {
            $("select#oblast").change(function () {
                var oblast_id = $("#oblast > option:selected").attr("value");
                $("#Rayondiv").hide();
                $.ajax({
                    type: "GET",
                    contentType: "application/json",
                    url: "http://site.com/Regions.aspx/FindGorodByOblastID/",
                    data: 'oblast_id=' + oblast_id,
                    dataType: "json",
                    success: function (data) {
                        if (data.length > 0) {
                            var options = '';
                            for (p in data) {
                                var gorod = data[p];
                                options += "<option value='" + gorod.Id + "'>" + gorod.Name + "</option>";
                            }
                            $("#gorod").removeAttr('disabled').html(options);
                        } else {
                            $("#gorod").attr('disabled', false).html('');
                        }
                    }
                });
            });
        });
    </script>
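    A likely cause, though it cannot be confirmed from the snippet alone: the absolute http://site.com/... URL makes this a cross-origin XMLHttpRequest whenever the page is served from any other host, and Firefox and IE block it under the same-origin policy. Two common workarounds are sketched below: a relative, same-origin URL, or JSONP if the server can wrap the JSON in a callback (plain ASP.NET page methods do not do that out of the box). The fillGorodSelect handler name is made up, and if the target is a [WebMethod] page method it may also need ScriptMethod(UseHttpGet = true) to accept GET requests.

    // Same-origin: drop the host so the request goes back to the serving site.
    $.ajax({
        type: "GET",
        url: "/Regions.aspx/FindGorodByOblastID/",
        data: { oblast_id: oblast_id },
        dataType: "json",
        success: fillGorodSelect
    });

    // Cross-origin: only works if the server wraps the JSON in a callback function.
    $.ajax({
        url: "http://site.com/Regions.aspx/FindGorodByOblastID/",
        data: { oblast_id: oblast_id },
        dataType: "jsonp",
        success: fillGorodSelect
    });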

    Read the article

  • How to set parameters in Python zlib module

    - by fagricipni
    I want to write a Python program that makes PNG files. My big problem is with generating the CRC and the data in the IDAT chunk. Python 2.6.4 does have a zlib module, but there are extra settings needed. The PNG specification REQUIRES the IDAT data to be compressed with zlib's deflate method with a window size of 32768 bytes, but I can't find how to set those parameters in the Python zlib module. As for the CRC for each chunk, the zlib module documentation indicates that it contains a CRC function. I believe that calling that CRC function as crc32(data,-1) will generate the CRC that I need, though if necessary I can translate the C code given in the PNG specification. Note that I can generate the rest of the PNG file and the data that is to be compressed for the IDAT chunk, I just don't know how to properly compress the image data for the IDAT chunk after implementing the initial filtering step.
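    A minimal sketch of both pieces, assuming filtered_scanlines already holds the filtered image bytes (that name is made up): zlib.compress writes a zlib stream with the default 32768-byte (wbits = 15) window, which is what the PNG specification requires for IDAT, and zlib.crc32 with its default start value already computes the standard CRC-32 that PNG uses over the chunk type plus the chunk data, so no -1 seed should be needed.

    import struct
    import zlib

    def make_chunk(chunk_type, data):
        # chunk_type is a 4-byte type field such as b"IDAT"; data is the chunk payload.
        # PNG's CRC-32 covers the type field plus the data; the mask keeps the value
        # unsigned on both Python 2 and 3.
        crc = zlib.crc32(chunk_type + data) & 0xffffffff
        return struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)

    # Default compression uses the 32768-byte window the PNG spec requires for IDAT.
    idat_chunk = make_chunk(b"IDAT", zlib.compress(filtered_scanlines))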

    Read the article

  • How can we order a column as int using hibernate criteria API?

    - by Satya
    Hi, I want to fetch data from the database using the Hibernate Criteria API. The data should be ordered by a column as a number, but that column is defined as varchar in the DB and I need it treated as numeric. The problem I am facing with the Criteria API is that it orders the values like strings only. For example, I am getting data like 9, 8, 7, 6, 5, 4, 3, 2, 1, 10 but I want it as 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. Are there any Hibernate methods to convert varchar to number, like convert("some column", int) or cast("some column", int)?
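    The Criteria API's Order class does not apply a cast out of the box. A hedged workaround is an HQL query using cast, which Hibernate supports where the underlying database allows an ANSI-style cast; the Ticket entity and seq property below are made-up names. Another option is to load the rows and sort them numerically in Java.

    // HQL cast: orders the varchar column numerically on databases that support CAST.
    List<Ticket> tickets = session
            .createQuery("from Ticket t order by cast(t.seq as integer) desc")
            .list();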

    Read the article

  • FileReference and HttpService Browse Image Modify it then Upload it

    - by user177787
    Hello, I am trying to build an image uploader. The user can:
    - browse a local file with button.browse,
    - select one and save it as a FileReference,
    - then we call FileReference.load() and bind the data to our image control,
    - after that we rotate it, which changes the image data,
    - and finally we upload it to a server.

    To change the image data I get the matrix of the displayed image, transform it, and re-apply the new matrix to my old image:

    private function TurnImage():void {
        // Turn it
        var m:Matrix = _img.transform.matrix;
        rotateImage(m);
        _img.transform.matrix = m;
    }

    Now the trouble is that I really don't know how to send the data as a file to my server, because it is not stored in the FileReference, and the data inside a FileReference is read-only, so we can't change it or create a new one; that means I can't use .upload(). Then I tried HTTPService.send, but I can't figure out how to send a file rather than plain text or XML.
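    A sketch of one common approach, not taken from the question itself: snapshot the rotated image into a BitmapData, re-encode the pixels with one of the Flex SDK encoders, and POST the resulting ByteArray with URLRequest/URLLoader instead of FileReference.upload(). The upload URL and the server-side handling are assumptions, and sizing the snapshot to the rotated bounds is left out for brevity.

    import flash.display.BitmapData;
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.net.URLRequestMethod;
    import flash.utils.ByteArray;
    import mx.graphics.codec.PNGEncoder;

    private function uploadRotatedImage():void {
        // Capture the pixels as they appear on screen, applying the rotated matrix.
        var snapshot:BitmapData = new BitmapData(int(_img.width), int(_img.height));
        snapshot.draw(_img, _img.transform.matrix);

        // Re-encode the pixels; PNGEncoder (or JPEGEncoder) ships with the Flex SDK.
        var bytes:ByteArray = new PNGEncoder().encode(snapshot);

        // POST the raw bytes; example.com/upload.php is a placeholder endpoint.
        var request:URLRequest = new URLRequest("http://example.com/upload.php");
        request.method = URLRequestMethod.POST;
        request.contentType = "application/octet-stream";
        request.data = bytes;
        var loader:URLLoader = new URLLoader();
        loader.load(request);
    }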

    Read the article

  • XmlDocument.InnerXml is null, but InnerText is not

    - by Adam Neal
    I'm using XmlDocument and XmlElement to build a simple (but large) XML document that looks something like:

    <Widgets>
      <Widget>
        <Stuff>foo</Stuff>
        <MoreStuff>bar</MoreStuff>
        ...lots more child nodes
      </Widget>
      <Widget>
      ...lots more Widget nodes
    </Widgets>

    My problem is that when I'm done building the XML, XmlDocument.InnerXml is null, but InnerText still shows all the text of all the child nodes. Has anyone ever seen a problem like this before? What kind of input data would cause these symptoms? I expected the XmlDocument to just throw an exception if it was given bad data. Note: I'm pretty sure this is related to the input data, as I can only reproduce it against certain data sets. I also tried escaping the data with SecurityElement.Escape, but it made no difference.
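    Only a guess, since the excerpt does not show the failing data: characters that are legal in a .NET string but not in XML 1.0 (for example most control codes) are a common way for certain data sets to break serialization while InnerText, which simply concatenates the text nodes, still looks fine; SecurityElement.Escape only handles the five markup characters and would not catch them. A sketch of scrubbing such characters before building the nodes; XmlConvert.IsXmlChar needs .NET 4, and earlier frameworks need a manual range check:

    using System.Linq;
    using System.Xml;

    static string ScrubInvalidXmlChars(string value)
    {
        // Drop characters that XML 1.0 cannot represent (mostly control codes).
        return new string(value.Where(XmlConvert.IsXmlChar).ToArray());
    }

    // While building the document, e.g.:
    // stuffElement.InnerText = ScrubInvalidXmlChars(rawValue);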

    Read the article
