Search Results

Search found 69126 results on 2766 pages for 'oracle data miner'.

Page 1737/2766

  • How can I implement an interface member as protected?

    - by Nicolas Dorier
    Hi, I was quite surprised when I saw the metadata of ReadOnlyObservableCollection in VS 2008...

        public class ReadOnlyObservableCollection<T> : ReadOnlyCollection<T>,
            INotifyCollectionChanged, INotifyPropertyChanged
        {
            // Summary:
            //     Initializes a new instance of the System.Collections.ObjectModel.ReadOnlyObservableCollection<T>
            //     class that serves as a wrapper for the specified System.Collections.ObjectModel.ObservableCollection<T>.
            //
            // Parameters:
            //   list:
            //     The collection to wrap.
            public ReadOnlyObservableCollection(ObservableCollection<T> list);

            // Summary:
            //     Occurs when an item is added or removed.
            protected virtual event NotifyCollectionChangedEventHandler CollectionChanged;
            //
            // Summary:
            //     Occurs when a property value changes.
            protected virtual event PropertyChangedEventHandler PropertyChanged;

            // Summary:
            //     Raises the System.Collections.ObjectModel.ReadOnlyObservableCollection<T>.CollectionChanged
            //     event.
            //
            // Parameters:
            //   args:
            //     The event data.
            protected virtual void OnCollectionChanged(NotifyCollectionChangedEventArgs args);
            //
            // Summary:
            //     Raises the System.Collections.ObjectModel.ReadOnlyObservableCollection<T>.PropertyChanged
            //     event.
            //
            // Parameters:
            //   args:
            //     The event data.
            protected virtual void OnPropertyChanged(PropertyChangedEventArgs args);
        }

    As you can see, CollectionChanged, a member of INotifyCollectionChanged, is implemented as protected... and I can't do that in my own class. The .NET Framework should not compile! Does someone have an explanation for this mystery?

    Read the article
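
    A minimal sketch of how the same shape can be reproduced in your own class: the interface member is implemented explicitly and simply forwards to a protected event, so the metadata viewer only shows the protected member. This is the usual pattern for getting that shape; whether the BCL does exactly this internally is not visible from the metadata stub above.

        using System.Collections.Specialized;

        public class MyReadOnlyCollection : INotifyCollectionChanged
        {
            // Protected surface that derived classes can raise or override.
            protected virtual event NotifyCollectionChangedEventHandler CollectionChanged;

            // Explicit implementation satisfies the interface and forwards
            // subscriptions to the protected event.
            event NotifyCollectionChangedEventHandler INotifyCollectionChanged.CollectionChanged
            {
                add { CollectionChanged += value; }
                remove { CollectionChanged -= value; }
            }
        }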

  • Converting Python Script to Vb.NET - Involves Post and Input XML String

    - by Jason Shoulders
    I'm trying to convert a Python script to VB.NET. The Python script appears to accept some XML input data and then take it to a web URL and do a POST. I tried some VB.NET code to do it, but I think my approach is off, because I got an error back ("BadXmlDataErr") and I can't really format my input XML very well - I'm only posting name/value strings, while the input XML is richer than that. Here is an example of what the XML input data looks like in the Python script:

        <obj is="MyOrg:realCommand_v1/" >
            <int name="priority" val="1" />
            <real name="value" val="9.5" />
            <str name="user" val="MyUserName" />
            <reltime name="overrideTime" val="PT60S"/>
        </obj>

    Here's the VB.NET code I attempted to convert that with:

        Dim reqparm As New Specialized.NameValueCollection
        reqparm.Add("priority", "1")
        reqparm.Add("value", "9.5")
        reqparm.Add("user", "MyUserName")
        reqparm.Add("overrideTime", "PT60S")

        Using client As New Net.WebClient
            Dim sTheUrl As String = "[My URL]"
            Dim responsebytes = client.UploadValues(sTheUrl, "POST", reqparm)
            Dim responsebody = (New System.Text.UTF8Encoding).GetString(responsebytes)
        End Using

    I feel like I should be doing something else. Can anyone point me in the right direction?

    Read the article
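
    A hedged sketch of the direction this usually takes: the service seems to expect the XML document itself as the request body, not form-encoded name/value pairs, so the XML string can be posted raw with an appropriate content type. The content type ("text/xml" here) is an assumption; the URL placeholder is kept from the question.

        Dim xml As String = "<obj is=""MyOrg:realCommand_v1/"">" & _
            "<int name=""priority"" val=""1"" />" & _
            "<real name=""value"" val=""9.5"" />" & _
            "<str name=""user"" val=""MyUserName"" />" & _
            "<reltime name=""overrideTime"" val=""PT60S"" />" & _
            "</obj>"

        Using client As New Net.WebClient()
            client.Headers(Net.HttpRequestHeader.ContentType) = "text/xml"
            ' UploadString sends the string as the POST body and returns the response body.
            Dim responseBody As String = client.UploadString("[My URL]", "POST", xml)
        End Using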

  • Android XmlSerializer limitation?

    - by Rexb
    Hi all, my task is just to get an XML string using XmlSerializer. The problem is that the serializer seems to stop adding any new element and/or attribute to the XML document when it reaches a certain length (perhaps 10,000 chars?). My questions: have you experienced this kind of problem? What could the possible solutions be? Here is my sample test code:

        public void doSerialize() throws XmlPullParserException, IllegalArgumentException, IllegalStateException, IOException {
            StringWriter writer = new StringWriter();
            XmlSerializer serializer = XmlPullParserFactory.newInstance().newSerializer();
            serializer.setOutput(writer);
            serializer.startDocument(null, null);
            serializer.startTag(null, "START");
            for (int i = 0; i < 20; i++) {
                serializer.attribute(null, "ATTR" + i, "VAL " + i);
            }
            serializer.startTag(null, "DATA");
            for (int i = 0; i < 500; i++) {
                serializer.attribute(null, "attr" + i, "value " + i);
            }
            serializer.endTag(null, "DATA");
            serializer.endTag(null, "START");
            serializer.endDocument();
            String xml = writer.toString(); // value: cut off around the 493rd attribute
            int n = xml.length();           // value: 10125
        }

    Any help will be greatly appreciated.

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQlite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the google hits for this topic that I can find are referring to the core-data provided SQLite which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!

    Read the article
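
    For the "wrap the entire thing in a transaction" fallback mentioned in the question above, here is a minimal sketch, assuming the Web SQL openDatabase API that PhoneGap exposes; the database and table names are made up. The whole seed runs as one atomic transaction, so an interrupted first launch leaves nothing half-written and the seed simply runs again on the next start.

        var db = openDatabase('appdb', '1.0', 'App data', 5 * 1024 * 1024);

        db.transaction(function (tx) {
            // Everything in this callback is one atomic transaction: if the app
            // dies part-way through, none of it is committed.
            tx.executeSql('CREATE TABLE IF NOT EXISTS contacts (id INTEGER PRIMARY KEY, name TEXT)');
            tx.executeSql('INSERT INTO contacts (name) VALUES (?)', ['Alice']);
            tx.executeSql('INSERT INTO contacts (name) VALUES (?)', ['Bob']);
        }, function (err) {
            console.log('Seed failed, will retry on next launch: ' + err.message);
        }, function () {
            console.log('Seed committed');
        });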

  • Is there a simple way to "roll your own forms" for mysql in php, for example in jquery?

    - by talkingnews
    I've been googling around for a really simple way of making what is, in effect, nothing more than an enhanced phpMyAdmin. In a MySQL database, I have: name, address, phone, website etc., plus 2 or 3 custom fields. This data is pulled out to make a website. All I want is to be able to make a freeform form, a bit like Access, but for the web, and the only thing I want to do over and above normal field editing would be to have a list of when I contacted each person, what was said, and perhaps a reminder when the next action is due. I've looked at so many CRMs my mind is boggling, and they all do WAY more than I need. I don't have leads or accounts; all I need is to be able to update a person's details and have that data live in the same DB my site is generated from. I'm happy to learn if I can get pointed in the right direction, and I have a feeling that something like what I want might lie in the direction of jQuery. It's just that there's so much good jQuery stuff about, I can't see the wood for the trees! Thanks.

    Read the article

  • EJB and JPA and @OneToMany - Transaction too long?

    - by marioErr
    Hello. I'm using EJB and JPA, and when I try to access the PhoneNumber objects in the phoneNumbers attribute of a Contact, it sometimes takes several minutes for the data to actually be returned. It just returns no phoneNumbers, not even null, and then, after some time, when I call it again, they magically appear. This is how I access the data:

        for (Contact c : contactFacade.findAll()) {
            System.out.print(c.getName() + " " + c.getSurname() + " : ");
            for (PhoneNumber pn : c.getPhoneNumbers()) {
                System.out.print(pn.getNumber() + " (" + pn.getDescription() + "); ");
            }
        }

    I'm using a facade session EJB generated by NetBeans (basic CRUD methods). It always prints the correct name and surname; the phone numbers and descriptions are only printed some time (it varies) after creating them via the facade. I'm guessing it has something to do with transactions. How do I solve this? These are my JPA entities:

    Contact:

        @Entity
        public class Contact implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String name;
            private String surname;
            @OneToMany(cascade = CascadeType.REMOVE, mappedBy = "contact")
            private Collection<PhoneNumber> phoneNumbers = new ArrayList<PhoneNumber>();

    PhoneNumber:

        @Entity
        public class PhoneNumber implements Serializable {
            private static final long serialVersionUID = 1L;
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            private String number;
            private String description;
            @ManyToOne()
            @JoinColumn(name="CONTACT_ID")
            private Contact contact;

    Read the article
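
    If the missing numbers come from the collection being loaded lazily (or from a stale cache) rather than in the same transaction that reads the contacts, one hedged sketch is to fetch the numbers together with the contacts. The facade method name is made up; the entity manager is assumed to be container-managed.

        // Inside the facade:
        @SuppressWarnings("unchecked")
        public List<Contact> findAllWithNumbers() {
            return em.createQuery(
                "SELECT DISTINCT c FROM Contact c LEFT JOIN FETCH c.phoneNumbers")
                .getResultList();
        }

        // Or, if the collection should always travel with the contact:
        @OneToMany(cascade = CascadeType.REMOVE, mappedBy = "contact", fetch = FetchType.EAGER)
        private Collection<PhoneNumber> phoneNumbers = new ArrayList<PhoneNumber>();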

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL. But this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use Express Editions and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. The sentence above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add information? And where does MS SQL sit? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least: MS SQL, MySQL, PostgreSQL, Firebird (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS)? Anyway MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question by making a simple list like: MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • Help me with a resize function - not working with PNG

    - by user304828
    It does not work with PNG: it creates the thumbnail PNG, but the file has no data in it (like null data). With JPG/JPEG it still works without error. Why?

        function thumbnail($pathtoFile, $thumWidth, $pathtoThumb)
        {
            // info of image
            $infor = pathinfo($pathtoFile);

            // Setting the resize parameters
            list($width, $height) = getimagesize($pathtoFile);
            $modwidth = $thumWidth;
            $modheight = floor($height * ($modwidth / $width));

            // Resizing the image
            $thumb = imagecreatetruecolor($modwidth, $modheight);
            switch (strtolower($infor['extension'])) {
                case 'jpeg':
                case 'jpg':
                    $image = imagecreatefromjpeg($pathtoFile);
                    break;
                case 'gif':
                    $image = imagecreatefromgif($pathtoFile);
                    break;
                case 'png':
                    $image = imagecreatefrompng($pathtoFile);
                    break;
            }
            imagecopyresampled($thumb, $image, 0, 0, 0, 0, $modwidth, $modheight, $width, $height);
            switch (strtolower($infor['extension'])) {
                case 'jpeg':
                case 'jpg':
                    imagejpeg($thumb, $pathtoThumb, 70);
                    break;
                case 'gif':
                    imagegif($thumb, $pathtoThumb, 70);
                    break;
                case 'png':
                    imagepng($thumb, $pathtoThumb, 70);
                    break;
            }
            // destroy tmp
            imagedestroy($thumb);
        }

    Read the article
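
    One likely culprit, offered as a hedged guess: imagepng()'s third parameter is a zlib compression level from 0 to 9, not a 0-100 quality like imagejpeg(), so passing 70 can make the PNG write fail and leave an empty file; imagegif() takes no quality argument at all. A rewritten output switch might look like this (the alpha handling is optional, for transparent PNGs):

        switch (strtolower($infor['extension'])) {
            case 'jpeg':
            case 'jpg':
                imagejpeg($thumb, $pathtoThumb, 70);   // 0-100 quality is fine for JPEG
                break;
            case 'gif':
                imagegif($thumb, $pathtoThumb);        // imagegif() takes no quality argument
                break;
            case 'png':
                imagesavealpha($thumb, true);          // keep the alpha channel in the saved file
                imagepng($thumb, $pathtoThumb, 9);     // 0-9 compression level, not 0-100 quality
                break;
        }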

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the DB on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. is my understanding of AWS correct? and 2. is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS though..

    Read the article

  • problems with async jquery and loops

    - by Seth Vargo
    I am so confused. I am trying to append portals to a page by looping through an array and calling a method I wrote called addModule(). The method gets called the right number of times (checked via an alert statement), in the correct order, but only one or two of the portals actually populate. I have a feeling it's something with the loop and async, but it's easier explained with the code:

        moduleList = [['weather','test'],['test']];
        for (i in moduleList) {
            $('#content').append('');
            for (j in moduleList[i]) {
                addModule(i, moduleList[i][j]); // column, name
            }
        }

        function addModule(column, name) {
            alert('adding module ' + name);
            $.get('/modules/' + name.replace(' ', '-') + '.php', function(data) {
                $('#' + column).append(data);
            });
        }

    For each array in the main array, I append a new column, since that's what each sub-array is - a column of portals. Then I loop through that sub-array and call addModule() with that column and the name of that module (which works correctly). Something buggy happens in my addModule() method so that it only adds the first and last modules, or sometimes a middle one, or sometimes none at all... I'm so confused!

    Read the article
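
    A hedged sketch of one way to restructure the loop above, assuming each sub-array is a column that needs its own container to receive the async responses (the append('') in the snippet looks like it lost its markup, so the container ids and class name below are made up):

        var moduleList = [['weather', 'test'], ['test']];

        $.each(moduleList, function (columnIndex, column) {
            // Create the column container up front so the async callbacks
            // always have an element to append into.
            $('#content').append('<div id="col-' + columnIndex + '" class="column"></div>');

            $.each(column, function (j, name) {
                $.get('/modules/' + name.replace(/ /g, '-') + '.php', function (data) {
                    $('#col-' + columnIndex).append(data);
                });
            });
        });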

  • Bootstrap: want to trigger some custom events on layout change

    - by DS_web_developer
    So, there are some events in my app that change the layout of my page (re-positioning of elements)... mostly driven by Bootstrap collapsing and fading (tabs, collapsibles, accordions)... I would like to fire an event whenever a change is about to happen and another when the change is done. Right now I came up with something like this:

        $('.collapse').on("shown hidden", function(){
            jQuery(myAPP).trigger("layoutchanged");
        });
        $('.collapse').on("show hide", function(){
            jQuery(myAPP).trigger("layoutchanging");
        });

    and then...

        jQuery(myAPP).on("layoutchanging", function(e){
            log("Start changing");
        });
        jQuery(myAPP).on("layoutchanged", function(e){
            log("Layout changed");
        });

    It works fine for collapse and accordions, but on tabs, where the markup is like this:

        <ul class="nav nav-tabs can_deactivate">
            <li><a href="#tab_1" data-toggle="tab">Open Tab 1</a></li>
            <li><a href="#tab_2" data-toggle="tab">Open Tab 2</a></li>
        </ul>
        <div class="tab-content">
            <div class="tab-pane fade" id="tab_1">
                Lorem ipsum
            </div>
            <div class="tab-pane fade" id="tab_2">
                Lorem ipsum
            </div>
        </div>

    it works only on show, but not on hide... What can I do? JS Fiddle: http://jsfiddle.net/KL7Af/

    Read the article
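
    A hedged guess about why hide never fires: in Bootstrap 2.x the tab plugin only emits show and shown, and it emits them on the link being activated rather than on the panes, so nothing bound to hide on .collapse can ever run. Since showing one tab always implies hiding the previous one, the same two custom events can be driven from the tab links alone:

        // Tab events fire on the a[data-toggle="tab"] element, not on the panes.
        $('a[data-toggle="tab"]').on('show', function () {
            jQuery(myAPP).trigger('layoutchanging');   // previous pane is about to be hidden
        });
        $('a[data-toggle="tab"]').on('shown', function () {
            jQuery(myAPP).trigger('layoutchanged');    // new pane is visible
        });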

  • Multi-threaded library calls in ASP.NET page request.

    - by ProfK
    I have an ASP.NET app, very basic, but right now too much code to post if we're lucky and I don't have to. We have a class called ReportGenerator. On a button click, the method GenerateReports is called. It makes an async call to InternalGenerateReports using ThreadPool.QueueUserWorkItem and returns, ending the ASP.NET response. It doesn't provide any completion callback or anything. InternalGenerateReports creates and maintains five threads in the thread pool, one report per thread, also using QueueUserWorkItem, and waits in a loop until calls on all of them complete. Each thread uses an ASP.NET ReportViewer control to render a report to HTML. That is, for 200 reports, InternalGenerateReports should create 5 threads 40 times. As threads complete, report data is queued, and when all five have completed, report data is flushed to disk. My biggest problems are that after running just one report, the aspnet process is 'hung', and also that at around 200 reports, the app just hangs. I just simplified this code to run in a single thread, and this works fine. Before we get into details like my code, is there anything obvious in the above scenario that might be wrong?

    Read the article

  • Help with GetGlyphOutline function (WinAPI)

    - by user146780
    I want to use this function to get contours, and within these contours, I want to get cubic Beziers. I think I have to call it with GGO_BEZIER. What puzzles me is how the return buffer works. "A glyph outline is returned as a series of one or more contours defined by a TTPOLYGONHEADER structure followed by one or more curves. Each curve in the contour is defined by a TTPOLYCURVE structure followed by a number of POINTFX data points. POINTFX points are absolute positions, not relative moves. The starting point of a contour is given by the pfxStart member of the TTPOLYGONHEADER structure. The starting point of each curve is the last point of the previous curve or the starting point of the contour. The count of data points in a curve is stored in the cpfx member of TTPOLYCURVE structure. The size of each contour in the buffer, in bytes, is stored in the cb member of TTPOLYGONHEADER structure. Additional curve definitions are packed into the buffer following preceding curves and additional contours are packed into the buffer following preceding contours. The buffer contains as many contours as fit within the buffer returned by GetGlyphOutline." I'm really not sure how to access the contours. I know that I can cast a pointer to another pointer type, but I'm not sure how to get at the contours based on this documentation. Thanks

    Read the article
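
    A hedged sketch of walking that buffer, assuming an HDC with the font already selected; error handling is omitted and the glyph value is a placeholder:

        #include <windows.h>
        #include <stdlib.h>

        void dump_glyph_contours(HDC hdc, UINT ch)
        {
            GLYPHMETRICS gm;
            MAT2 identity = { {0,1}, {0,0}, {0,0}, {0,1} };   /* identity transform */

            /* First call: ask for the required buffer size. */
            DWORD size = GetGlyphOutline(hdc, ch, GGO_BEZIER, &gm, 0, NULL, &identity);
            BYTE *buf = (BYTE *)malloc(size);
            GetGlyphOutline(hdc, ch, GGO_BEZIER, &gm, size, buf, &identity);

            DWORD offset = 0;
            while (offset < size) {                            /* one TTPOLYGONHEADER per contour */
                TTPOLYGONHEADER *hdr = (TTPOLYGONHEADER *)(buf + offset);
                POINTFX start = hdr->pfxStart;                 /* contour start point */
                (void)start;

                DWORD curveOffset = offset + sizeof(TTPOLYGONHEADER);
                while (curveOffset < offset + hdr->cb) {       /* curves packed after the header */
                    TTPOLYCURVE *curve = (TTPOLYCURVE *)(buf + curveOffset);
                    /* With GGO_BEZIER, curve->wType is TT_PRIM_LINE or TT_PRIM_CSPLINE,
                       and curve->apfx[0 .. curve->cpfx-1] are the curve's points. */
                    curveOffset += sizeof(WORD) * 2 + sizeof(POINTFX) * curve->cpfx;
                }
                offset += hdr->cb;                             /* cb = bytes used by this contour */
            }
            free(buf);
        }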

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split different phases into standalone modules which would then be used as necessary (since sometimes I will be interested only in the output of module A, sometimes I would need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments, run the required modules (and perhaps give some output, or should this be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability and maintainability, and to be able to have more people working on different modules without the need to have some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation, the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).

    Read the article
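
    A hedged sketch of what such a module contract could look like - purely illustrative, all names are made up: each phase reads and writes named results in a shared context, so output produced once by module A stays available to B and C without rerunning A.

        #include <map>
        #include <string>
        #include <vector>

        // Shared results that later phases can reuse instead of recomputing.
        struct Context {
            std::map<std::string, std::vector<unsigned char> > blobs;   // named binary results
            std::map<std::string, long> offsets;                        // named offsets into the input file
        };

        // Minimal contract every phase implements; the driver runs only the
        // modules the command line asked for, in dependency order.
        class Module {
        public:
            virtual ~Module() {}
            virtual const char* name() const = 0;
            virtual void run(const std::string& inputFile, Context& ctx) = 0;
        };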

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP. Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits, and using a switch/case I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes, have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing. I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers? So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?

    Read the article
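
    A hedged sketch of one common shape for this: read whatever the socket gives you into a single buffer, then pull complete packets out of it with a table of per-type handlers instead of nested switches. The packet types and lengths here are made-up placeholders.

        #include <stddef.h>
        #include <stdint.h>

        typedef size_t (*packet_handler)(const uint8_t *buf, size_t len);

        /* Each handler returns how many bytes it consumed, or 0 if the packet
           is not complete yet. Fixed sizes below are placeholders. */
        static size_t handle_ping(const uint8_t *buf, size_t len) { (void)buf; return len >= 2 ? 2 : 0; }
        static size_t handle_data(const uint8_t *buf, size_t len) { (void)buf; return len >= 8 ? 8 : 0; }

        static const packet_handler handlers[256] = {
            [0x01] = handle_ping,
            [0x02] = handle_data,
        };

        /* Returns how many bytes of 'buf' were consumed; the caller keeps the
           unconsumed tail and appends the next read() to it. */
        size_t parse_buffer(const uint8_t *buf, size_t len)
        {
            size_t off = 0;
            while (off < len) {
                packet_handler h = handlers[buf[off]];
                if (!h) break;                       /* unknown type: resync or abort */
                size_t used = h(buf + off, len - off);
                if (used == 0) break;                /* incomplete packet: wait for more data */
                off += used;
            }
            return off;
        }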

  • Adding <tr> from repeater's ItemDataBound Event

    - by nemiss
    My repeater's templates generate a table, where each item is a table row. When a very specific condition is met (item data), I want to add an additional row to the table from this event. How can I do that?

        protected void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                bool tmp = bool.Parse(DataBinder.Eval(e.Item.DataItem, "somedata").ToString());
                if (!tmp && e.Item.ItemIndex != 0)
                {
                    // Add row after this item
                }
            }
        }

    I can use e.Item.Controls.Add() and add a TableRow, but for that I need to locate a table, right? How can I solve that?

    UPDATE: I will explain why I need this. I am creating a sort of message board, where data entries are displayed in a table style. The first items in the table are "important" items; after those items, I want to add this row. I could solve it using two repeaters, where the first repeater is bound to pinned items and the second repeater is bound to regular items. But I don't want to have two repeaters, nor do I want to complicate the business logic by separating the fetched data into pinned and not-pinned collections. I think the best option is to do it "on the fly", using one repeater and one data source.

    Read the article
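
    A hedged sketch: because each item renders its own <tr>, an extra row can be emitted as raw markup from the event itself, without locating the table. The separator markup and column count are assumptions.

        protected void rptData_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                bool tmp = bool.Parse(DataBinder.Eval(e.Item.DataItem, "somedata").ToString());
                if (!tmp && e.Item.ItemIndex != 0)
                {
                    // Appended as the item's last child, so it renders right after this item's row.
                    e.Item.Controls.Add(new LiteralControl(
                        "<tr class=\"separator\"><td colspan=\"3\">Regular items</td></tr>"));
                }
            }
        }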

  • Paperclip failing to upload on specific scaffold, yet works on others

    - by Saifis
    I know there are tons of questions about Paperclip, but I failed to find the answer to my problem. I know it's probably something simple, but I'm running out of hair to pull out. I have Paperclip working on other parts of my project with no problem; however, a certain scaffold fails to upload - all the attributes of the uploaded file are nil. Here is the relevant information.

    Model:

        has_attached_file :foo,
            :styles => { :thumb => "140x140>" },
            :url => "/data/:id/:style/:basename.:extension",
            :path => ":rails_root/public/data/:id/:style/:basename.:extension"

    View:

        <% form_for(@bar, :html => { :multipart => true }) do |f| %>
          <%= f.error_messages %>
          ----------
          <li><%= f.label :top %>
              <%= f.file_field :foo %></li>
          ----------
          <ul><%= f.submit "Save" %></ul>
        <% end %>

    Also, comparing the logs to the parts that work, the :foo attribute seems to be passing different values than in the ones that work. In the logs, when the Paperclip upload works, it looks like this:

        "image"=>#<File:/var/folders/M5/M5HEb+WhFxmqNDGH5s-pNE+++TI/-Tmp-/RackMultipart20100512-1302-5e2e6e-0>

    When it does not, it seems to pass the file name directly:

        "foo"=>"foo_image.png"

    I am developing locally on Mac OS X using local Rails and Ruby libs.

    Read the article

  • Why do I get an error while trying to set the content of a tabspec in android?

    - by rushinge
    I have an Android activity in which I'm using tabs.

        public class UnitActivity extends TabActivity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.unit_view);

                TabHost tabHost = getTabHost();
                TabSpec spec;

                spec = tabHost.newTabSpec("controls");
                spec.setIndicator("Control");
                spec.setContent(R.layout.unit_control);
                tabHost.addTab(spec);

                spec = tabHost.newTabSpec("data");
                spec.setIndicator("Data");
                spec.setContent(R.layout.unit_data);
                tabHost.addTab(spec);
            }
        }

    However, when I run the program it crashes with the error: "Could not create tab content because could not find view with id 2130903042". I don't understand what the problem is, because R.layout.unit_data refers to a layout file in my resource directory (res/layout/unit_data.xml). As far as I can tell unit_data.xml is well formed, and I've even referenced it successfully in another activity:

        class UnitData extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.unit_data);
                Toast.makeText(this, "Hi from UnitData.onCreate", 5);
            }
        }

    which does not give an error and renders the layout just fine. What's going on? Why can't I reference this layout when creating a tab?

    Read the article
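
    A hedged sketch of why this might fail and one way around it: TabSpec.setContent(int) expects the id of a view that already lives inside the TabHost's content FrameLayout, not a layout resource id (2130903042 looks like an R.layout constant). One option is to inflate the layouts into the tab content view first and hand setContent() the view ids; the root view ids below are assumed names.

        TabHost tabHost = getTabHost();
        LayoutInflater.from(this).inflate(R.layout.unit_control, tabHost.getTabContentView(), true);
        LayoutInflater.from(this).inflate(R.layout.unit_data, tabHost.getTabContentView(), true);

        TabHost.TabSpec spec = tabHost.newTabSpec("controls");
        spec.setIndicator("Control");
        spec.setContent(R.id.unit_control_root);   // id of the root view inside unit_control.xml (assumed name)
        tabHost.addTab(spec);

        spec = tabHost.newTabSpec("data");
        spec.setIndicator("Data");
        spec.setContent(R.id.unit_data_root);      // id of the root view inside unit_data.xml (assumed name)
        tabHost.addTab(spec);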

  • Visual Studio - Edit source code located in a database

    - by mfeingold
    I am building something similar to Server Explorer for Apache CouchDB. One of the things necessary is to be able to edit CouchDB view definitions, which in CouchDB are JavaScript functions. How can I trick Visual Studio into using my object to retrieve and save the content of the JavaScript function but still use the rest of it - I am happy with the editor itself and have no intention of writing my own Editor/Language Service, etc. The latter would be a much bigger effort than this project warrants.

    Edit: After more digging I am still stuck. Here is what I know: the IVsUIShellOpenDocument interface provides a method OpenStandardEditor which can be used to open the standard Visual Studio editor. As one of the parameters, this method takes a pointer to the IUnknown interface of the document data object. This object is supposed to implement several interfaces described in many places all over MSDN. The Visual Studio SDK also provides a 'sample' implementation of the document data object, VsTextBufferClass. I can create an instance of this class, and when I pass the pointer to the instance to OpenStandardEditor I can see my editor and it seems to work OK. When I try to implement my own class implementing the same interfaces (IVsTextBuffer, VsTextBuffer, IVsTextLines), OpenStandardEditor returns success, but VS bombs out on the call to editor.Show() with an access violation. My suspicion is that VsTextBufferClass also implements some other interface(s), but not in the C# way - rather in the good old COM way. I just do not know which one(s). Any thoughts?

    Read the article

  • How can I get all content within <table></table> tags using a regex?

    - by Bob Dylan
    So I'm writing an application that will do a little screen scraping. All the pages (about 1000 or so) contain this line:

        <table border="0" cellspacing="3">
          <tr><td>First rows stuff</td></tr>
          <tr>
            <td>
              The data I want is in here <br />
              and it's seperated by these annoying <br />
              's. No id's, classes, or even a single <p> tag.
              Just a bunch of <br /> tags.
            </td>
          </tr>
        </table>

    So I just need to get the data within the 2nd row out. How can I do this? Should I use a regex or something else?

    Read the article
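
    A hedged sketch, assuming a .NET app: for one fixed page layout a regex can get the second cell out, with Singleline so "." crosses line breaks; for anything less regular, an HTML parser such as HTML Agility Pack is usually the more robust choice. The pattern below is tied to the exact markup shown above.

        using System.Text.RegularExpressions;

        // 'html' holds one scraped page.
        Match m = Regex.Match(html,
            @"<table border=""0"" cellspacing=""3"">.*?</tr>\s*<tr>\s*<td>(?<cell>.*?)</td>",
            RegexOptions.Singleline | RegexOptions.IgnoreCase);

        if (m.Success)
        {
            // Split the cell on the <br /> separators mentioned in the question.
            string[] pieces = Regex.Split(m.Groups["cell"].Value, @"<br\s*/?>\s*");
        }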

  • Is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the sockets opened and goes through each of them, sending the updates if required.

    Can we do something like this in an Apache module:

    1. The Apache process gets the new connection. It maintains the state for the connection. It keeps the state in some global memory and returns to the root process to signify that it is done, so that it can accept a new connection.
    2. The Apache process, though it has returned the status to the root process, also keeps executing in parallel, going through its global store and sending updates to the client, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?

    Regards, Prashant

    Read the article

  • New record may be written twice in clustered index structure

    - by Cupidvogel
    As per the article at Microsoft, under the "Test 1: INSERT Performance" section, it is written that for the table with the clustered index, only a single write operation is required, since the leaf nodes of the clustered index are data pages (as explained in the section "Clustered Indexes and Heaps"), whereas for the table with the nonclustered index, two write operations are required - one for the entry into the index B-tree and another for the insert of the data itself. I don't think that is necessarily true. Clustered indexes are implemented through B+ tree structures, right? If you look at this article, which gives a simple example of inserting into a B+ tree, we can see that when 8 is initially inserted, it is written only once, but then when 5 comes in, it is written to the root node as well (thus written twice, albeit not at the time of insertion). Also, when the next 8 comes in, it is written twice, once at the root and then at the leaf. So wouldn't it be more correct to say that the number of rewrites in the case of a clustered index is much lower than with a nonclustered index structure (where it must occur every time), instead of saying that a second write doesn't occur with a clustered index at all?

    Read the article

  • curl_multi_exec stops if one url is 404, how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1. Why does it stop? That doesn't make sense to me.
    2. How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();

        while ($resultSet = mysql_fetch_array($SQL)) {
            // Load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            // Only load it for five seconds (long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;

        // Execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh, $running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // Loop to remove (close) the regular handles.
        foreach ($handles as $ch) {
            // Remove the current handle.
            curl_multi_remove_handle($mh, $ch);
        }

        // Close the multi handle
        curl_multi_close($mh);

    Read the article
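
    A hedged sketch of the tail end of that script: a 404 is still a completed transfer, so the multi handle itself should not stop on it; checking each handle afterwards makes failures visible without aborting the batch, and curl_multi_select() waits for activity instead of a fixed two-second sleep.

        do {
            curl_multi_exec($mh, $running);
            // Wait for activity on any handle instead of sleeping a fixed 2 seconds.
            curl_multi_select($mh, 1.0);
        } while ($running > 0);

        foreach ($handles as $ch) {
            $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            if ($httpCode != 200) {
                // Log and move on; the other transfers are unaffected.
                error_log('Request failed with HTTP ' . $httpCode . ': '
                    . curl_getinfo($ch, CURLINFO_EFFECTIVE_URL));
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);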

  • django-cms lighttpd redirect domain to url

    - by Robert
    Hello, I am using django-cms for my site, but instead of the language aliases /en/ and /de/ I need to use another domain. I would like to avoid running multiple Django instances; instead I would like to use lighttpd redirects if possible. I would like requests coming to domain2.com to get data from domain.com/en. The best would be if a user entering domain2.com/offer transparently got data from domain.com/en/offer. I have tried many solutions with url.redirect and url.rewrite, but none seems to work as desired. Also tried the approach from http://stackoverflow.com/questions/261904/matching-domains-with-regex-for-lighttpd-mod-evhost-www-domain-com-domain-com but that didn't work. Please help. This is my lighttpd configuration:

        $HTTP["host"] == "^domain2\.com" {
            url.redirect = ("^/(.*)" => "http://domain.com/en/$1")
        }

        $HTTP["host"] =~ "^domain\.com" {
            server.document-root = "/var/www/django/projects/domain/"
            accesslog.filename   = "/var/log/lighttpd/domain.log-access.log"
            server.errorlog      = "/var/log/lighttpd/www.domain-error.log"

            fastcgi.server = (
                "/domain-service.fcgi" => (
                    "main" => (
                        "socket" => "/tmp/django-domain.sock",
                        "check-local" => "disable",
                    )
                ),
            )

            alias.url = ( "/media/" => "/var/www/django/projects/domain/media/", )

            url.rewrite-once = (
                "^(/site_media.*)$" => "$1",
                "^(/media.*)$" => "$1",
                "^/favicon\.ico$" => "/media/favicon.ico",
                "^(/.*)$" => "/domain-service.fcgi$1",
            )
        }

    Thanks

    Read the article
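
    A hedged observation about the config above: in lighttpd conditionals, == and != do literal string comparison while =~ and !~ take regexes, so the domain2 block as written never matches. Something along these lines keeps the path when redirecting. Note that url.redirect is a visible redirect (the browser ends up on domain.com); serving the /en/ content under the domain2.com address transparently would instead need mod_proxy or a second fastcgi mapping.

        $HTTP["host"] =~ "^(www\.)?domain2\.com$" {
            url.redirect = ( "^/(.*)$" => "http://domain.com/en/$1" )
        }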

  • Best approach for Java/Maven/JPA/Hibernate build with multiple database vendor support?

    - by HDave
    I have an enterprise application that uses a single database, but the application needs to support MySQL, Oracle, and SQL Server as installation options. To try to remain portable we are using JPA annotations with Hibernate as the implementation. We also have a test-bed instance of each database running for development. The app is building nicely in Maven, and I've played around with the hibernate3-maven-plugin and can auto-generate DDL for a given database dialect. What is the best way to approach this so that individual developers can easily test against all three databases and our Hudson-based CI server can build things properly? More specifically:

    1. I thought the hbm2ddl goal in the hibernate3-maven-plugin would just generate a schema file, but apparently it connects to a live database and attempts to create the schema. Is there a way to have it just create the schema file for each database dialect without connecting to a database?
    2. If the hibernate3-maven-plugin insists on actually creating the database schema, is there a way to have it drop the database and recreate it before creating the schema?
    3. I am thinking that each developer (and the Hudson build machine) should have their own separate database on each database server. Is this typical?
    4. Will developers have to run Maven three times... once for each database vendor? If so, how do I merge the results on the build machine?
    5. There is an hbm2doc goal within the hibernate3-maven-plugin. It seems overkill to run this three times... I've got to believe it'd be nearly identical for each database.

    Read the article
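
    For the "run Maven three times" part, one hedged sketch is a profile per target database that only swaps the Hibernate dialect (and any per-vendor output name), so developers and Hudson run the same build with -P mysql, -P oracle or -P mssql and the per-dialect artifacts can be collected from the three runs. The property names below are made up for illustration; only the dialect class names are real Hibernate dialects.

        <profiles>
          <profile>
            <id>mysql</id>
            <properties>
              <db.dialect>org.hibernate.dialect.MySQL5InnoDBDialect</db.dialect>
              <db.schema.suffix>mysql</db.schema.suffix>
            </properties>
          </profile>
          <profile>
            <id>oracle</id>
            <properties>
              <db.dialect>org.hibernate.dialect.Oracle10gDialect</db.dialect>
              <db.schema.suffix>oracle</db.schema.suffix>
            </properties>
          </profile>
          <profile>
            <id>mssql</id>
            <properties>
              <db.dialect>org.hibernate.dialect.SQLServerDialect</db.dialect>
              <db.schema.suffix>mssql</db.schema.suffix>
            </properties>
          </profile>
        </profiles>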
