Search Results

Search found 77599 results on 3104 pages for 'test data'.


  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

      Morning Coffee

      When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one whether an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products, that would track all these things for you. But at that moment, we had no choice but to write our own PowerShell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

      "But we have a cluster... we don't need backups"

      Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

      Backups, fine. How often do I take a backup?

      The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain that zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel confident about the steps you need to follow when an emergency strikes.

      A backup is nothing more than an untested restore

      Backups are files. Files are prone to corruption. Put those two together and consider how you feel about those backups sitting on that network drive. When was the last time you restored any of them? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 or above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

      Back to the "how often" question for a second. If you have the disk, the network bandwidth, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your transaction log, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

      Where to back up to? Network share? Locally? SAN volume?

      This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that; it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, copied to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a scheme together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
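      To make the "interrogate the servers" idea from the Morning Coffee section concrete, here is a minimal T-SQL sketch - an illustration, not the original script from the post; the 24-hour threshold is an assumption you would tune to your RPO - that asks an instance which databases have a missing or stale full backup:

          -- List databases whose newest full backup is absent or older than 24 hours.
          -- Run from a central server against each instance you monitor.
          SELECT d.name,
                 MAX(b.backup_finish_date) AS last_full_backup
          FROM   sys.databases d
                 LEFT JOIN msdb.dbo.backupset b
                        ON b.database_name = d.name
                       AND b.type = 'D'          -- 'D' = full database backup
          WHERE  d.name <> 'tempdb'
          GROUP BY d.name
          HAVING MAX(b.backup_finish_date) IS NULL
              OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());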

    Read the article

  • Problem locating glaux.h

    - by rodnower
    Hello, I am trying to compile code that begins with:

        #include <stdlib.h>
        #include <GL/gl.h>
        #include <glaux.h>

    using the command:

        cc -o test test.c -I/usr/local/include -L/usr/local/lib -lMesaaux -lMesatk -lMesaGL -lXext -lX11 -lm

    But one of the errors I get is:

        test.c:3:18: error: glaux.h: No such file or directory

    Then I tried:

        yum provides glaux.h

    but yum didn't find anything. Before all this, I installed Mesa with:

        yum install mesa*

    So, can anyone tell me where I can get the header file? Thanks in advance.
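    One detail worth checking (a sketch, assuming an RPM-based system, not a confirmed fix): yum matches "provides" against full file paths, so a wildcard query may succeed where the bare file name fails:

        # search by path pattern instead of bare name
        yum provides '*/glaux.h'

        # or check whether the installed Mesa packages already ship the header
        rpm -qa 'mesa*' | xargs rpm -ql | grep -i glaux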

    Read the article

  • TypeInitializationException When Getting an NHibernate Session

    - by Paul Johnson
    I’ve run into what appears to be an NHibernate config problem. Basically, I ran a simple proof-of-concept persistence integration test using NUnit; the test simply queries an Oracle database and successfully returns the last record received by the underlying table. However, when the assemblies are taken out of the NUnit test environment and deployed as they would be for an actual application build, my call for an NHibernate session results in a ‘TypeInitializationException’ while executing the code line:

        sessionFactory = New Configuration().Configure().BuildSessionFactory()

    The application is a VB.NET console app running against an Oracle 9.2 database, using a ‘coding framework’ published on the web by Bill McCafferty entitled 'NHibernate Best Practices with ASP.NET' (pre S#arp Architecture). I am running version 2.1.2.4000 of NHibernate. Any assistance much appreciated. Kind Regards Paul J.
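    A possible direction (a sketch, not a confirmed diagnosis): with a parameterless Configure(), NHibernate looks for hibernate.cfg.xml in the executable's directory, and a TypeInitializationException at that line often means the file, or a dependency such as log4net.dll, was not deployed alongside the console app. A minimal sketch of such a file, with placeholder connection details:

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property>
            <!-- placeholder credentials - substitute your own -->
            <property name="connection.connection_string">Data Source=MYDB;User Id=scott;Password=tiger</property>
            <property name="dialect">NHibernate.Dialect.Oracle9iDialect</property>
          </session-factory>
        </hibernate-configuration>

    In Visual Studio, marking the file as Content with "Copy to Output Directory: Copy always" ensures it travels with the build.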

    Read the article

  • Code changes in the dtsx file of an SSIS package not reflected after deploying and running on the server

    - by SKumar
    I have a folder called Test-Deploy in which I keep all the dtsx files, the manifest, and the configuration files. Whenever I want to deploy the SSIS package, I run the manifest file in this folder and deploy. My problem is that I had to change one of the dtsx files. So I opened only that particular dtsx file in BI Studio, updated it, and built it. After the build, I copied the dtsx file from the bin folder to my Test-Deploy folder. When I deployed and ran this new package from the Test-Deploy folder, the changes I made were not reflected in the result. I could not find any difference between the results before and after the change. My question is: has the server saved my previous dtsx file somewhere, and is it executing that old dtsx file instead of the new one?

    Read the article

  • jQuery and CodeIgniter AJAX with JSON not working

    - by thedp
    Hello, I'm trying to make my first AJAX with JSON call using jQuery and CodeIgniter, but for some weird reason it's not working. The jQuery code:

        var item = "COOL!";
        $.post("http://192.168.8.138/index.php/main/test", { "item": item },
            function(data) {
                alert(data.result);
            }, "json");

    The CodeIgniter code:

        <?php
        class main extends Controller {
            function test() {
                $item = trim($this->input->post('item'));
                $array = array('result' => $item);
                echo json_encode($array);
            }
        }
        ?>

    I tried to access the http://192.168.8.138/index.php/main/test page manually and it seems to be working; I got:

        {"result":""}

    I also tried to use Firebug to see the XMLHttpRequest but saw nothing. I have no idea what I'm doing wrong... Need help really badly. Thank you.
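    Two frequent culprits with this exact pattern (offered as a sketch, not a confirmed diagnosis): the absolute http://192.168.8.138/... URL trips the browser's same-origin policy whenever the page itself is served from a different host, and the response carries no JSON Content-Type, which some setups are strict about:

        // jQuery side - a relative URL keeps the request on the page's own origin:
        $.post("/index.php/main/test", { item: item }, function(data) {
            alert(data.result);
        }, "json");

        // CodeIgniter side (PHP) - declare the response as JSON before echoing:
        $this->output->set_header('Content-Type: application/json');
        echo json_encode($array);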

    Read the article

  • Clojure / HBase: How to Import HBaseTestingUtility in v0.94.6.1

    - by David Williams
    In Clojure, if I want to start a test cluster using the HBase testing utility, I have to annotate my dependencies with:

        [org.apache.hbase/hbase "0.92.2" :classifier "tests" :scope "test"]

    First of all, I have no idea what this means. According to Leiningen's sample project.clj:

        ;; Dependencies are listed as [group-id/name version]; in addition
        ;; to keywords supported by Pomegranate, you can use :native-prefix
        ;; to specify a prefix. This prefix is used to extract natives in
        ;; jars that don't adhere to the default "<os>/<arch>/" layout that
        ;; Leiningen expects.

    Question 1: What does that mean?

    Question 2: If I upgrade the version:

        [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]

    then I receive a ClassNotFoundException:

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration

    What's going on here and how do I fix it?

    Read the article

  • Rails: restful authentication setup help

    - by SuperString
    Hi, I downloaded the plugin from http://github.com/techweenie/restful-authentication.git and then ran:

        rails generate plugin authenticated user session

    This is the result I got:

        create  vendor/plugins/authenticated
        create  vendor/plugins/authenticated/MIT-LICENSE
        create  vendor/plugins/authenticated/README
        create  vendor/plugins/authenticated/Rakefile
        create  vendor/plugins/authenticated/init.rb
        create  vendor/plugins/authenticated/install.rb
        create  vendor/plugins/authenticated/uninstall.rb
        create  vendor/plugins/authenticated/lib
        create  vendor/plugins/authenticated/lib/authenticated.rb
        invoke  test_unit
        inside  vendor/plugins/authenticated
        create  test
        create  test/authenticated_test.rb
        create  test/test_helper.rb

    Then I tried to run rake db:migrate, but I got an error saying the rake tasks in restful-authentication/tasks/auth.rake are deprecated and that lib/tasks should be used instead. I am new to Rails and tried looking online, but things seem to be outdated. Please help!

    Read the article

  • PHP Regular Expression

    - by saturngod
    I want to change

        &lt;lang class='brush:xhtml'&gt;test&lt;/lang&gt;

    to

        <pre class='brush:xhtml'>test</pre>

    My code looks like this:

        <?php
        $content = "&lt;lang class='brush:xhtml'&gt;test&lt;/lang&gt;";
        $pattern = array();
        $replace = array();
        $pattern[0] = "/&lt;lang class=([A-Za-z='\":])* &lt;/";
        $replace[0] = "<pre $1>";
        $pattern[1] = "/&lt;lang&gt;/";
        $replace[1] = "</pre>";
        echo preg_replace($pattern, $replace, $content);
        ?>

    But it's not working. How should I change my code, or what is wrong with it?
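    For comparison, a sketch of a pattern pair that does perform the transformation (my own attempt, not from the question): the first pattern above never matches because it expects a space and '&lt;/' where the input has '&gt;', and the second pattern targets '&lt;lang&gt;' while the actual closing tag is '&lt;/lang&gt;':

        <?php
        $content = "&lt;lang class='brush:xhtml'&gt;test&lt;/lang&gt;";
        $pattern = array(
            "/&lt;lang ([^&]*)&gt;/",  // opening tag: capture the attribute text
            "/&lt;\/lang&gt;/",        // closing tag (note the slash)
        );
        $replace = array("<pre $1>", "</pre>");
        echo preg_replace($pattern, $replace, $content);
        // prints: <pre class='brush:xhtml'>test</pre>
        ?>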

    Read the article

  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for testing the performance of Java modules. One problem I faced is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times the test should be repeated), and the execution time is logged. So, when the tests are finished, we have the following history:

        4 test executions
        2 threads
        36ms overall time

        - idle    * test execution

              5ms    9ms       4ms      13ms
        T1 |-*****-*********-****-*************-|
             3ms   6ms     7ms      11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

             3ms  3ms  3ms  3ms
        T1 |-***-***-***-***----------------|
             3ms   6ms     7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is better than what I measure, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
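    One common answer (a sketch of a suggestion, not code from the question; the ThreadStats holder and its fields are hypothetical) is to let every thread report its own busy time and sum the per-thread rates, so a fast thread is no longer measured against the slowest one:

        // Each thread records what it actually did; throughput is summed per thread.
        final class ThreadStats {
            final int executions;      // tests this thread completed
            final long activeMillis;   // from this thread's start to its last completion
            ThreadStats(int executions, long activeMillis) {
                this.executions = executions;
                this.activeMillis = activeMillis;
            }
        }

        static double throughputPerSecond(java.util.List<ThreadStats> stats) {
            double total = 0.0;
            for (ThreadStats s : stats) {
                total += s.executions * 1000.0 / s.activeMillis;  // this thread's ops/sec
            }
            return total;  // aggregate ops/sec across all threads
        }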

    Read the article

  • MarkupBuilder using list

    - by tathamr
    I am currently using sql.rows("statement") and storing the result in a list. I am then trying to set up my XML file using MarkupBuilder. Is there a better way than iterating over the list, popping off an item, and then parsing it to add my different column names and values? What is stored per list entry is:

        ID='X' Period='Yearly' Length='test'

    So the XML would be something similar to:

        <table name='test'>
          <row>
            <column name='ID'>X</column>
            <column name='Period'>Yearly</column>
            <column name='Length'>test</column>
          </row>
        </table>
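    One way to avoid the popping entirely (a sketch, assuming sql is an initialized groovy.sql.Sql instance and the table/column names from the example): Sql.rows() returns map-like rows, and MarkupBuilder can walk them directly:

        import groovy.xml.MarkupBuilder

        def rows = sql.rows('SELECT ID, Period, Length FROM test')  // map-like GroovyRowResults
        def writer = new StringWriter()
        new MarkupBuilder(writer).table(name: 'test') {
            rows.each { r ->
                row {
                    r.each { k, v -> column(name: k, v) }  // one <column> per key/value
                }
            }
        }
        println writer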

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size for the writes, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking - several seconds, up to about 10 seconds typically. I find that the app is hanging in the close() method, waiting, I assume, for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.
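    One workaround worth sketching (a suggestion, not a confirmed answer): flush() only empties Python's userspace buffer; it is os.fsync() that forces the OS cache onto the device. Syncing every block keeps the kernel's backlog to roughly one block, so a cancel takes effect almost immediately, at the cost of a slower overall copy:

        import os

        BLOCK = 4096

        def copy_with_cancel(src_path, dst_path, cancelled):
            """cancelled is a callable returning True once the user aborts."""
            with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
                while not cancelled():
                    block = src.read(BLOCK)
                    if not block:
                        break
                    dst.write(block)
                    dst.flush()
                    os.fsync(dst.fileno())  # wait until this block is on the device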

    Read the article

  • How to remove a few lines from a Unicode registry file using batch commands in Windows?

    - by Cosmin
    Hi. I have a program that generates some data in the registry. I save it with "reg export HKCU\Software\ProgramName\Data data.reg" (Unicode format). I need to take it to another computer and import it there so the program on that computer can use the data. But first I have to remove some text lines from data.reg. The lines are easy to find because they contain certain strings. Right now I'm doing this manually (using WordPad) every few days, but maybe there is another way... Oh, and I can't install other programs on these computers (access is restricted), so I have to use batch/cmd files. What I have tried so far:
    - redirecting the export to "con", but that only displays the text, it doesn't capture it in a variable;
    - using "for /F ...", but this works only with ANSI and removes blank lines.
    Can somebody please help me? Thank you.
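    A sketch of one possible batch approach (assumptions: "BadString" stands in for the marker text, and reg import accepts the re-encoded file - worth verifying on the target machine): type under cmd /a converts the UTF-16 export to ANSI so findstr can filter it, and cmd /u converts the result back:

        @echo off
        rem convert the Unicode export to ANSI so findstr can process it
        cmd /a /c type data.reg > data_ansi.reg
        rem drop every line containing the marker text ("BadString" is a placeholder)
        findstr /v /c:"BadString" data_ansi.reg > data_clean_ansi.reg
        rem convert back to Unicode and import
        cmd /u /c type data_clean_ansi.reg > data_clean.reg
        reg import data_clean.reg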

    Read the article

  • BasicConfigurator in Apache Commons Logging

    - by adisembiring
    I use log4j as the logger for my web application. With log4j, I can set the log level in log4j.properties or log4j.xml, and we instantiate a logger as follows:

        static Logger logger = Logger.getLogger(SomeClass.class);

    I initialize the log4j BasicConfigurator in a servlet's init method. But I usually test the application using JUnit, so there I initialize the BasicConfigurator in the setUp method; after that, I test the application and I can see the log. Because the web app is deployed on WebSphere, I changed all the logger instances to:

        private Log log = LogFactory.getLog(Foo.class);

    I don't know how to load a basic configurator using ACL (Apache Commons Logging), so I can't control the debug level in my JUnit tests. Do you have any suggestion that doesn't involve changing private Log log = LogFactory.getLog(Foo.class); back to static Logger logger = Logger.getLogger(SomeClass.class);?
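    One thing that may help (a sketch, with a hypothetical test class name): commons-logging is only a facade - when log4j is on the classpath, LogFactory.getLog() delegates to it, so the JUnit fixture can still configure log4j itself and the Log calls will honour the level:

        import org.apache.log4j.BasicConfigurator;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        public class FooTest extends junit.framework.TestCase {
            protected void setUp() {
                Logger.getRootLogger().removeAllAppenders();  // avoid duplicates across tests
                BasicConfigurator.configure();                // console appender, simple layout
                Logger.getRootLogger().setLevel(Level.DEBUG); // the level the Log calls will obey
            }
        }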

    Read the article

  • Reused UIWebView showing previously loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view, I reload cached data into the webview using the method

        - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL

    and I wait for the callback

        - (void)webViewDidFinishLoad:(UIWebView *)webView

    In the meantime I hide the webview and show a 'loading' label. Only when I receive webViewDidFinishLoad do I show the webview. Many times I see the previous data that was loaded into the webview for a brief second before the new data I loaded kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and allocating it every time?
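    One workaround (a sketch, not a verified fix - note that the blank load also fires webViewDidFinishLoad once, so the delegate needs to ignore that first callback): blank the reused webview before starting the new load, so stale content can never flash:

        // clear the previously rendered page before kicking off the new load
        [webView loadHTMLString:@"" baseURL:nil];
        webView.hidden = YES;
        loadingLabel.hidden = NO;   // hypothetical label outlet
        [webView loadData:newData MIMEType:mimeType
            textEncodingName:encoding baseURL:baseURL];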

    Read the article

  • Android: Displaying Video on VideoView

    - by AndroidDev93
    I'm trying to display a video from my sdcard in a VideoView. Here is my code:

        String name = Environment.getExternalStorageDirectory() + "/test.mp4";
        final VideoView videoView = (VideoView) findViewById(R.id.videoView1);
        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            public void onPrepared(MediaPlayer arg0) {
                videoView.start();
            }
        });
        videoView.setVideoPath(name);

    The file I am trying to open is called test.mp4 and it's located in the sdcard folder. I get an error saying the application has unfortunately stopped. I would appreciate it if someone could help me. Thanks.

    EDIT: I used the debugger and found out that I get an InvocationTargetException. The detailed message says:

        Failure delivering result ResultInfo{who=null, request=1001, result=-1, data=Intent { dat=file:///mnt/sdcard/test.mp4 }} to activity : java.lang.NullPointerException

    EDIT: I looked at the logcat again and it seems to give the error at

        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {

    so I'm guessing either videoView or MediaPlayer is null.
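    An observation offered as a sketch rather than a confirmed diagnosis: if logcat points at the setOnPreparedListener line, findViewById has most likely returned null, which happens when R.id.videoView1 is not in the layout passed to setContentView. A guard makes that failure explicit (the layout name here is a placeholder):

        setContentView(R.layout.activity_main);  // placeholder - must contain videoView1
        final VideoView videoView = (VideoView) findViewById(R.id.videoView1);
        if (videoView == null) {
            Log.e("VideoDemo", "videoView1 not found in the inflated layout");
            return;
        }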

    Read the article

  • Nul-terminating an int array

    - by robUK
    Hello, gcc 4.4.4, C89. I was just experimenting with an int array, and something came to mind: can I nul-terminate it? For example, I am using a 0 to terminate it. However, 0 could well be a valid value in this array. The code below will terminate after the 5, even though I mean 0 to be a valid number. I could specify the size of the array, but in this case I don't want to, as I am just interested in this particular problem. Many thanks for any advice.

        #include <stdio.h>

        static void test(int *p);

        int main(void)
        {
            int arr[] = {30, 450, 14, 5, 0, 10, '\0'};
            test(arr);
            return 0;
        }

        static void test(int *p)
        {
            while (*p) {
                printf("Array values [ %d ]\n", *p++);
            }
        }
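    For what it's worth, a sketch of the usual alternative (a suggestion, not from the question): pass the element count explicitly, computed where the array type is still known. A sentinel only works if you can pick a value that cannot occur in the data, which 0 here is not:

        #include <stdio.h>

        static void test_n(const int *p, size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)
                printf("Array values [ %d ]\n", p[i]);
        }

        int main(void)
        {
            int arr[] = {30, 450, 14, 5, 0, 10};
            /* count computed at the definition, so 0 stays a valid element */
            test_n(arr, sizeof arr / sizeof arr[0]);
            return 0;
        }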

    Read the article

  • Bash: Check if file was modified since used in script

    - by Thomas Münz
    I need to check in a script whether a file was modified since I read it (another application can modify it in between). According to the bash manual there is a "-N" test which should report whether a file was modified since it was last read. I tried it in a small script, but it doesn't seem to work.

        #!/bin/bash
        file="test.txt"
        echo "test" > $file
        cat $file
        if [ -N $file ]; then
            echo "modified since read"
        else
            echo "not modified since read"
        fi

    I also tried an alternative approach of touching another file and using if [ "file1" -nt "file2" ]; but this works only with one-second accuracy, which under rare conditions may not be sufficient. Is there any other built-in bash solution for this problem, or do I really need to use diff or md5sum?
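    An alternative worth sketching (a suggestion, not from the question): record the file's modification time when you read it and compare it later. GNU stat's %Y gives whole seconds, while %y includes nanoseconds on filesystems that store them, which sidesteps the one-second granularity problem:

        #!/bin/bash
        file="test.txt"
        mtime_at_read=$(stat -c %y "$file")   # full timestamp, sub-second where supported
        cat "$file"
        # ... another application may modify the file here ...
        if [ "$(stat -c %y "$file")" != "$mtime_at_read" ]; then
            echo "modified since read"
        else
            echo "not modified since read"
        fi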

    Read the article

  • Need help with SQL table structure transformation

    - by Arnis L.
    I need to perform an update/insert while simultaneously changing the structure of the incoming data. Think about shops that have a defined work time for each day of the week. Hopefully this explains better what I'm trying to achieve.

    worktimeOrigin table, columns: shop_id, day, val; data:

        123 | "monday"    | "9:00 AM - 18:00"
        123 | "tuesday"   | "9:00 AM - 18:00"
        123 | "wednesday" | "9:00 AM - 18:00"

    shop table, columns: id, worktimeDestination_id

    worktimeDestination table, columns: id, monday, tuesday, wednesday

    My aim: I would like to insert the data from the worktimeOrigin table into worktimeDestination and point each shop at the appropriate worktimeDestination row.

    shop table data (updated):

        123 | 1

    worktimeDestination table data (inserted):

        1 | "9:00 AM - 18:00" | "9:00 AM - 18:00" | "9:00 AM - 18:00"

    Any ideas how to do that?
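    A sketch of one way to do it in plain SQL (assuming the column names above, and reusing shop_id as the destination row's id - if the real destination ids come from a sequence, the UPDATE needs to match on something else):

        -- pivot the per-day rows into one destination row per shop
        INSERT INTO worktimeDestination (id, monday, tuesday, wednesday)
        SELECT shop_id,
               MAX(CASE WHEN day = 'monday'    THEN val END),
               MAX(CASE WHEN day = 'tuesday'   THEN val END),
               MAX(CASE WHEN day = 'wednesday' THEN val END)
        FROM   worktimeOrigin
        GROUP BY shop_id;

        -- point each shop at its freshly created destination row
        UPDATE shop
        SET    worktimeDestination_id = id     -- destination id equals shop id here
        WHERE  id IN (SELECT shop_id FROM worktimeOrigin);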

    Read the article

  • How can I populate the Highcharts jQuery plugin dynamically from an MVC action?

    - by Anders Svensson
    I'm trying out the Highcharts jQuery plugin for charting data in an MVC application, but I need to get the data for the chart dynamically from an action method. How can I do that? Taking the example from the Highcharts site (http://highcharts.com/documentation/how-to-use):

        var chart1; // globally available
        $(document).ready(function() {
            chart1 = new Highcharts.Chart({
                chart: {
                    renderTo: 'chart-container-1',
                    defaultSeriesType: 'bar'
                },
                title: { text: 'Fruit Consumption' },
                xAxis: { categories: ['Apples', 'Bananas', 'Oranges'] },
                yAxis: { title: { text: 'Fruit eaten' } },
                series: [{
                    name: 'Jane',
                    data: [1, 0, 4]
                }, {
                    name: 'John',
                    data: [5, 7, 3]
                }]
            });
        });

    How can I get the data in there dynamically from the action method? Someone suggested I might use JSON, but couldn't specify how. If JSON is the way, I would really appreciate a simple and specific example, because I don't know much about JSON. Any help appreciated!
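    A sketch of the JSON route (action name, URL, and data shapes are illustrative assumptions, not a definitive implementation): have an action return the series array as JSON, fetch it with $.getJSON, and hand it straight to the chart's series option:

        // C# - a hypothetical action on HomeController returning the series as JSON
        public ActionResult FruitData()
        {
            var series = new[] {
                new { name = "Jane", data = new[] { 1, 0, 4 } },
                new { name = "John", data = new[] { 5, 7, 3 } }
            };
            return Json(series, JsonRequestBehavior.AllowGet);
        }

        // JavaScript - build the chart only after the data arrives
        $.getJSON('/Home/FruitData', function (series) {
            chart1 = new Highcharts.Chart({
                chart: { renderTo: 'chart-container-1', defaultSeriesType: 'bar' },
                title: { text: 'Fruit Consumption' },
                xAxis: { categories: ['Apples', 'Bananas', 'Oranges'] },
                series: series
            });
        });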

    Read the article

  • Android serialization: ImageView

    - by embo
    I have a simple class:

        public class Ball2 extends ImageView implements Serializable {
            public Ball2(Context context) {
                super(context);
            }
        }

    Serialization works:

        private void saveState() throws IOException {
            ObjectOutputStream oos = new ObjectOutputStream(openFileOutput("data", MODE_PRIVATE));
            try {
                Ball2 data = new Ball2(Game2.this);
                oos.writeObject(data);
                oos.flush();
            } catch (Exception e) {
                Log.e("write error", e.getMessage(), e);
            } finally {
                oos.close();
            }
        }

    But deserialization:

        private void loadState() throws IOException {
            ObjectInputStream ois = new ObjectInputStream(openFileInput("data"));
            try {
                Ball2 data = (Ball2) ois.readObject();
            } catch (Exception e) {
                Log.e("read error", e.getMessage(), e);
            } finally {
                ois.close();
            }
        }

    fails with the error:

        03-24 21:52:43.305: ERROR/read error(1948): java.io.InvalidClassException: android.widget.ImageView; IllegalAccessException

    How do I deserialize the object correctly?
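    A sketch of the usual way around this (a suggestion; the state fields are placeholders): Android View classes such as ImageView are not designed to be serialized - they drag in a Context and offer no accessible no-arg constructor for deserialization - so persist only the ball's own state and rebuild the view from it:

        // serialize this instead of the view itself
        public class Ball2State implements java.io.Serializable {
            private static final long serialVersionUID = 1L;
            int x, y;   // placeholder fields - whatever Ball2 actually needs to restore
        }

        // on load: read Ball2State back, then new Ball2(context) and apply the state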

    Read the article

  • A controller problem using a base CRUD model

    - by rkj
    In CodeIgniter I'm using a base CRUD My_model, but I have a small problem in my browse controller. My $data['posts'] gets all posts from the table called "posts", though the author in that table is just a user_id, which is why I need to use my "getusername" function (it takes an ID and returns the username) to grab the username from the users table. But I don't know how to proceed from here, since there is more than one post. I need the username to either be part of the $data['posts'] array or some other smart solution. Anyone who can help me out?

        function index()
        {
            $this->load->model('browse_model');
            $data['posts'] = $this->browse_model->get_all();
            $data['user'] = $this->browse_model->getusername(XX);
            $this->load->view('header');
            $this->load->view('browse/index', $data);
            $this->load->view('footer');
        }
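    A sketch of one way to attach the usernames (assuming get_all() returns objects with a user_id property): loop over the posts and decorate each one - though a JOIN to the users table inside the model's query would save the extra lookups:

        function index()
        {
            $this->load->model('browse_model');
            $posts = $this->browse_model->get_all();
            foreach ($posts as $post) {
                // objects are handled by reference, so this assignment sticks
                $post->username = $this->browse_model->getusername($post->user_id);
            }
            $data['posts'] = $posts;
            $this->load->view('header');
            $this->load->view('browse/index', $data);
            $this->load->view('footer');
        }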

    Read the article

  • Crystal Reports - conversion from text to decimal?

    - by 123rmn
    I have a Crystal Report which takes data from an XML template. For a particular field of the report, say 'Cost', the database stored procedure sends data to the XSD file in decimal format, but when the Crystal Report displays the data picked up from the XSD, it is rounded off. When I right-click on the other data fields of the report, I can see 'Field: table1.columnname', but when I click on the 'Cost' field, it shows 'Text:'. To my understanding, this is a text field that is mapped to pick data from the XSD, and since the type is text, it gives the result as text, hence truncating the decimals. Please suggest how I can get the decimals here. P.S.: This report was created by someone else, so I have no idea what they set up at the time. I have to fix it and I have no clue about it.

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it - mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things.

    One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging. If you're not familiar with the OUTPUT clause, you really should be - it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() - things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced - that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs - the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert, it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like - the UPDATE bit feels like "WHERE CURRENT OF ...", and the INSERT bit feels like a single-row insert. And it is - but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action - very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.

    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you're American) and using a regular INSERT statement. This is also doable if you're using MERGE to just do INSERTs. In case you hadn't realised, you can use MERGE in place of an INSERT statement. It's just like the UPSERT-style statement we've just seen, except that we want nothing to match. That's easy to do: we just use ON 1=2. This is obviously more convoluted than a straight INSERT, and it's slightly more effort for the database engine too. But if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table's columns... Yes, of course. That's what deleted and inserted give you.
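    As a minimal stand-in for the queries discussed above (the table and column names here are illustrative, not the originals from the post), an upsert-style MERGE whose OUTPUT clause captures $action and a source-only column into an audit table:

        -- upsert with auditing: note s.src is used only in the OUTPUT clause
        MERGE INTO dbo.Target AS t
        USING (SELECT id, val, src FROM dbo.Source) AS s
           ON t.id = s.id
        WHEN MATCHED THEN
            UPDATE SET t.val = s.val
        WHEN NOT MATCHED THEN
            INSERT (id, val) VALUES (s.id, s.val)
        OUTPUT $action, s.src, deleted.val, inserted.val
          INTO dbo.TargetAudit (action_taken, src, old_val, new_val);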

    Read the article

  • Spring bean creation via deserialization

    - by mdma
    Spring has many different ways of creating beans, but is it possible to create a bean by deserializing a resource? My application has a number of components, and each manipulates a certain type of data. During tests, the data object is instantiated directly and set directly on the component, e.g. component.setData(someDataObject). At runtime, the data is available as a serialized object and read in from the serialized stream by the component. Rather than having each component explicitly deserialize its data from the stream, it would be more consistent and flexible to have Spring deserialize the data object from a resource. Is there a DeserializerBeanFactory or something similar?
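    Not a built-in as far as I know, but a small FactoryBean gets close - a sketch (class name and property are made up for illustration):

        import java.io.ObjectInputStream;
        import org.springframework.beans.factory.FactoryBean;
        import org.springframework.core.io.Resource;

        public class DeserializingFactoryBean implements FactoryBean {
            private Resource resource;  // injected, e.g. classpath:data/component-a.ser
            public void setResource(Resource resource) { this.resource = resource; }

            public Object getObject() throws Exception {
                ObjectInputStream in = new ObjectInputStream(resource.getInputStream());
                try {
                    return in.readObject();
                } finally {
                    in.close();
                }
            }
            public Class getObjectType() { return null; }  // unknown until first read
            public boolean isSingleton() { return true; }
        }

    Declared once per data object, each component can then have its data injected like any other bean.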

    Read the article

  • Having trouble understanding the id/name of a Spring bean

    - by symfony
    In an XmlBeanFactory (including ApplicationContext variants), you use the id or name attributes to specify the bean id(s), and at least one id must be specified in one or both of these attributes.

    Does this mean the following are legal?

        <bean id="test">
        <bean name="test">

    But this is illegal?

        <bean non_idnorname="test">

    You may also, or instead, specify one or more bean ids (separated by a comma (,) or semicolon (;)) via the name attribute.

    Does that mean I can specify multiple ids this way?

        <bean name="id1;id2,id3">

    Can someone clear up my doubts?

    Read the article
