Search Results

Search found 8268 results on 331 pages for 'difference'.

Page 308/331 | < Previous Page | 304 305 306 307 308 309 310 311 312 313 314 315  | Next Page >

  • Zero code coverage with cobertura 1.9.2 but tests are working

    - by eraonel
    I run the code coverage target: <junit fork="yes" dir="${basedir}" failureProperty="test.failed"> <!-- Note the classpath order: instrumented classes are before the original (uninstrumented) classes. This is important. --> <classpath path="${instrumented.dir}" /> <classpath path="${classes.dir}" /> <classpath refid="classpath" /> <!-- The instrumented classes reference classes used by the Cobertura runtime, so Cobertura and its dependencies must be on your classpath. --> <classpath refid="cobertura.classpath" /> <formatter type="xml" /> <!--<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />--> <batchtest fork="yes" todir="${reports.xml.dir}"> <fileset dir="${classes.dir}"> <include name="**/generated/AllTests.class" /> </fileset> </batchtest> </junit> <junitreport todir="${reports.xml.dir}"> <fileset dir="${reports.xml.dir}"> <include name="TEST-*.xml" /> </fileset> <report format="frames" todir="${reports.html.dir}" /> </junitreport> Then I get the following output ( when using fork="true"): java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at net.sourceforge.cobertura.util.FileLocker.lock(FileLocker.java:124) at net.sourceforge.cobertura.coveragedata.ProjectData.saveGlobalProjectData(ProjectData.java:331) at net.sourceforge.cobertura.coveragedata.SaveTimer.run(SaveTimer.java:31) at java.lang.Thread.run(Thread.java:595) Caused by: java.io.IOException: No locks available at sun.nio.ch.FileChannelImpl.lock0(Native Method) at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:784) at java.nio.channels.FileChannel.lock(FileChannel.java:865) ... 8 more --------------------------------------- Unable to get lock on /vobs/rnc/rrt/roam2/roamSs/RoamMao_swb/RoamMao_bldu/ant_build/cobertura.ser.lock: null This is known to happen on Linux kernel 2.6.20. Make sure cobertura.jar is in the root classpath of the jvm process running the instrumented code. If the instrumented code is running in a web server, this means cobertura.jar should be in the web server's lib directory. Don't put multiple copies of cobertura.jar in different WEB-INF/lib directories. Only one classloader should load cobertura. It should be the root classloader. I am using Ant 1.7.0 and cobertura 1.9.2. Any ideas why there is no coverage? Test run ok as I see in my target. I have tried to switch java versions ( 1.5.0_06 and 1.6.0_10) but no difference.

    Read the article

  • Direct show video renderers suck?

    - by Daniel
    So I've been looking into the world of media playback for Windows and I've started making a C# media player using DirectShow. I started off using the VMR-7 windowed video renderer and it was brilliant, except it had a couple of small problems (multiple monitors, fullscreen). After some research I found that it's deprecated and I should be using VMR-9, so I changed it to VMR-9 windowless, then found out that advice was also out of date. Finally I'm using the Vista/Win7 (or XP + .NET 3) Enhanced Video Renderer (EVR), which is apparently the most up-to-date Microsoft video renderer and has all the flashy performance/quality features added to it (to be honest I haven't noticed any difference, but maybe I need a Blu-ray or HQ video to notice it).

    With EVR everything is working fine except resizing the video. It's really laggy/choppy/teary, probably something to do with its frame-queueing mechanism. To demonstrate my problem, open Media Player Classic, go to View - Options - Playback - Output and choose the "EVR" DirectShow video renderer. Now restart MPC and play a video; while it's playing, click and drag a corner to resize it. You'll notice it's horribly laggy. This is the exact same problem I am having. But if you choose "EVR Custom Pres. *" or "EVR Sync *", resizing works beautifully! So I tried googling for anything about EVR resizing issues and how to fix them, but I couldn't believe how little I could find. I'm guessing "Custom Pres." stands for "Custom Presenter", which sounds like they made their own. Also, you'll notice on the right-hand side that when you swap between EVR and the other EVR variants, the Resizer drop-down greys out.

    So basically I want to know how I can fix this frustrating resizing problem, and whether there is any decent documentation out there. There is a fair bit for VMR-7/9 but not much for EVR. I downloaded the DirectX SDK, which apparently has samples, but it was a waste of 500 MB of bandwidth as it had nothing relevant. Perhaps there is some way to stop it queueing up frames, if that is the problem? If you want code, say the word and I'll paste some in, but it's really quite simple and nothing much happens; I'm convinced it's a problem with the EVR renderer.

    EDIT: Oh, and one other thing: what does VLC use? If you go into the VLC options and change the renderer to anything but the default, they all suck. So is it using VMR-7, or its own renderer?

    Read the article

  • NSOperation inside NSOperationQueue not being executed

    - by Martin Garcia
    I really need help here; I'm desperate at this point. I have an NSOperation that is not being triggered when added to the NSOperationQueue. I added some logging to see the NSOperation status and this is the result:

        Queue operations count = 1
        Queue isSuspended = 0
        Operation isCancelled? = 0
        Operation isConcurrent? = 0
        Operation isFinished? = 0
        Operation isExecuted? = 0
        Operation isReady? = 1
        Operation dependencies? = 0

    The code is very simple, nothing special:

        LoadingConflictEvents_iPad *loadingEvents = [[LoadingConflictEvents_iPad alloc] initWithNibName:@"LoadingConflictEvents_iPad" bundle:[NSBundle mainBundle]];
        loadingEvents.modalPresentationStyle = UIModalPresentationFormSheet;
        loadingEvents.conflictOpDelegate = self;
        [self presentModalViewController:loadingEvents animated:NO];
        [loadingEvents release];

        ConflictEventOperation *operation = [[ConflictEventOperation alloc] initWithParameters:wiLr.formNumber pWI_ID:wiLr.wi_id];
        [queue addOperation:operation];

        NSLog(@"Queue operations count = %d",[queue operationCount]);
        NSLog(@"Queue isSuspended = %d",[queue isSuspended]);
        NSLog(@"Operation isCancelled? = %d",[operation isCancelled]);
        NSLog(@"Operation isConcurrent? = %d",[operation isConcurrent]);
        NSLog(@"Operation isFinished? = %d",[operation isFinished]);
        NSLog(@"Operation isExecuted? = %d",[operation isExecuting]);
        NSLog(@"Operation isReady? = %d",[operation isReady]);
        NSLog(@"Operation dependencies? = %d",[[operation dependencies] count]);

        [operation release];

    My operation does many things in its main method, but the problem is that main is never called; it is never executed. The weirdest part (believe me, I'm not crazy... yet): if I put a breakpoint on any NSLog line or on the creation of the operation, the main method will be called and everything will work perfectly.

    This has been working fine for a long time. I have been making some changes recently and apparently something screwed things up. One of those changes was upgrading the device (an iPad) to the iOS 5.1 SDK. For what it's worth, I have an iPhone (iOS 5.1) version of this application that uses the same NSOperation object; the difference is in the UI only, and everything works fine there. Any help will be really appreciated. Regards,

    Read the article

  • dbms_xmlschema fail to validate with complexType

    - by Andrew
    Preface: This works on one Oracle 11gR1 (Solaris 64) database and not on a second and we can't figure out the difference between the two databases. Somehow the complexType causes the validation to fail with this error: ORA-31154: invalid XML document ORA-19202: Error occurred in XML processing LSX-00200: element "shiporder" not empty ORA-06512: at "SYS.XMLTYPE", line 354 ORA-06512: at line 13 But the schema is valid (passes this online test: http://www.xmlme.com/Validator.aspx) -- Cleanup any existing schema begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; -- Define the problem schema (adapted from http://www.w3schools.com/schema/schema_example.asp) begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder"> <xs:complexType> <xs:sequence> <xs:element name="orderperson" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema>',owner=>'SCOTT'); end; -- Attempt to validate declare bbb xmltype; begin bbb := XMLType('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> <orderperson>John Smith</orderperson> </shiporder>'); XMLType.schemaValidate(bbb); end; Now if I gut the schema definition and leave only a string in the XML then the validation passes: begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder" type="xs:string"/> </xs:schema>',owner=>'SCOTT'); end; DECLARE xml XMLTYPE; BEGIN xml := XMLTYPE('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> John Smith </shiporder>'); XMLTYPE.schemaValidate(xml); END;

    Read the article

  • UITableViewCell outlets not set during bundle load (possibly very elementary question)

    - by Jan Zich
    What are the most common reasons for an outlet (a class property) not being set during a nib/bundle load? I'm sorry; most likely I'm not using the correct terms. These are my first steps with iPhone OS development and Objective-C, so please bear with me. Here are more details.

    Basically, I'm trying to create a table-view-based form with a fixed number of static rows. I followed this example (scroll down to "The Technique for Static Row Content"): http://developer.apple.com/iphone/library/documentation/userexperience/conceptual/TableView_iPhone/TableViewCells/TableViewCells.html

    I have one nib file with one table view and three table cells, with all connections set as in the example. The problem is that the corresponding cell properties in my controller are never initialised. I get an exception in cellForRowAtIndexPath complaining that the returned cell is nil: "UITableView dataSource must return a cell from tableView:cellForRowAtIndexPath". Here are the relevant parts of the controller implementation:

        @synthesize cellA;
        @synthesize cellB;
        @synthesize cellC;

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 3;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.row) {
                case 0: return cellA; break;
                case 1: return cellB; break;
                case 2: return cellC; break;
                default: return nil;
            }
        }

    And here is the interface part:

        @interface AssociatePhoneViewController : UITableViewController {
            UITableViewCell *cellA;
            UITableViewCell *cellB;
            UITableViewCell *cellC;
        }

        @property (nonatomic, retain) IBOutlet UITableViewCell *cellA;
        @property (nonatomic, retain) IBOutlet UITableViewCell *cellB;
        @property (nonatomic, retain) IBOutlet UITableViewCell *cellC;

        @end

    This must be one of the most embarrassing questions on StackOverflow; it looks like the most basic example code. Is it possible that the cells are not instantiated with the nib file? I have them at the same level, before the table view, in the nib file. I tried moving them after the table view, but it did not make any difference. Are table cells in some way special? Do I need to set some flag or property on them in the nib file? I was under the impression that all objects (views, windows, controllers …) listed in a nib file are simply instantiated (and linked using the provided connections). Could it possibly be some memory issue? The cell properties in my controller are not defined in any special way.

    Read the article

  • C++ adding friend to a template class in order to typecast

    - by user1835359
    I'm currently reading "Effective C++" and there is a chapter that contains code similar to this:

        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
        };

        template <typename T>
        Num<T> operator*(const Num<T>& lhs, const Num<T>& rhs) { ... }

        Num<int> n = 5 * Num<int>(10);

    The book says that this won't work (and indeed it doesn't) because you can't expect the compiler to use implicit conversion to deduce a template specialization. As a solution it suggests using the "friend" syntax to define the function inside the class:

        //It works
        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
            friend Num operator*(const Num& lhs, const Num& rhs) { ... }
        };

        Num<int> n = 5 * Num<int>(10);

    The book suggests using this friend declaration whenever I need implicit conversion to a template class type, and it all seems to make sense. But why can't I get the same example working with an ordinary function instead of an operator?

        template <typename T>
        class Num {
        public:
            Num(int n) { ... }
            friend void doFoo(const Num& lhs) { ... }
        };

        doFoo(5);

    This time the compiler complains that it can't find any 'doFoo' at all, and if I declare doFoo outside the class I get the expected mismatched-types error. It seems like the "friend ..." part is just being ignored. Is there a problem with my understanding? What is the difference between a function and an operator in this case?

    Read the article

  • How to set header font style as bold for the header of the table in a pdf file, in jsf

    - by Radhika
    Hi, I have used PdfPTable to convert table data into a PDF file using com.itextpdf.text.pdf.PdfPTable. The table is displayed, but the table data and the header are in the same style. To make a difference I have to set the header font style to bold. Can anybody help me out with this? I have attached my code here. Thanks in advance. import java.awt.Color; import java.util.ArrayList; import java.util.List; import javax.faces.model.ListDataModel; import com.mypackage.core.filter.domainobject.FilterResultDO; import com.itextpdf.text.Font; import com.itextpdf.text.FontFactory; import com.itextpdf.text.Phrase; import com.itextpdf.text.pdf.PdfPTable; public class PDFGenerator { //This method will generate PDF for Filter Result Screen (only DataTable level) @SuppressWarnings("unchecked") public static PdfPTable generatePDF(PdfPTable table,List filterResultDOList ,List filterResultHeaderList ) { //Initialize the table with number of columns required for the Datatable header int numberOfFilterLabelCols = filterResultHeaderList.size(); //PDF Table Frame table = new PdfPTable(numberOfFilterLabelCols); //Getting Filter Detail Table Heading for(int i = 0 ; i < numberOfFilterLabelCols; i++) { ColumnHeader commandHeaderObj = filterResultHeaderList.get(i); table.addCell(commandHeaderObj.getLabel()); } //Getting Filter Detail Data (Rows X Cols) FilterResultDO filterResultDOObj = filterResultDOList.get(0); List filterResultDataList = filterResultDOObj.getFilterResultLst(); int numberOfFilterDataRows = filterResultDataList.size(); //each row iteration for(int row = 0; row < numberOfFilterDataRows; row++) { List filterResultCols = filterResultDataList.get(row); int numberOfFilterDataCols = filterResultCols.size(); //columns iteration of each row for(int col = 0; col < numberOfFilterDataCols ; col++) { String filterColumnsValues = (String) filterResultCols.get(col); table.addCell(filterColumnsValues); } } return table; }//generatePDF }
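
    One way to get the effect asked about here, assuming the same com.itextpdf.text (iText 5) classes already imported in the code above: build each header cell from a Phrase that carries a bold font, instead of calling table.addCell(String), which uses the default font. The helper name and the font choice below are only an illustration:

        import com.itextpdf.text.Font;
        import com.itextpdf.text.FontFactory;
        import com.itextpdf.text.Phrase;
        import com.itextpdf.text.pdf.PdfPCell;
        import com.itextpdf.text.pdf.PdfPTable;

        public class PdfHeaderStyle {

            // Adds one header cell whose text is rendered in bold.
            public static void addBoldHeaderCell(PdfPTable table, String label) {
                Font headerFont = FontFactory.getFont(FontFactory.HELVETICA_BOLD, 10);
                PdfPCell cell = new PdfPCell(new Phrase(label, headerFont));
                table.addCell(cell);
            }
        }

    In the header loop of generatePDF, the call table.addCell(commandHeaderObj.getLabel()) could then become PdfHeaderStyle.addBoldHeaderCell(table, commandHeaderObj.getLabel()); calling table.setHeaderRows(1) afterwards would also mark that first row as a repeating header, if that is wanted.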

    Read the article

  • AsyncListViewAdapter + SimplePager, why is inactive pager clearing the table?

    - by Jaroslav Záruba
    I'm trying to make CellTable work together with AsyncListViewAdapter<T> and SimplePager<T>. The data gets displayed, but when the pager should be 'deaf' (meaning when all existing data are displayed) it still receives clicks and, more importantly, makes the displayed data go away. Instead of my data 'loading' indicator gets displayed, and it keep loading and loading... Obviously nothing gets loaded, as it doesn't even call the onRangeChanged handler. I went through the code-snippets in this thread, but I can't see anything suspicions on what I've been doing. Is there some obvious answer to a rookie mistake? I shrinked my variable names, hopefully it won't wrap too much. protected class MyAsyncAdapter extends AsyncListViewAdapter<DTO> { @Override protected void onRangeChanged(ListView<DTO> v) { /* * doesn't even get called on [go2start/go2end] click :( */ Range r = v.getRange(); fetchData(r.getStart(), r.getLength()); } } private void addTable() { // table: CellTable<DTO> table = new CellTable<DTO>(10); table.addColumn(new Column<DTO, String>(new TextCell()) { @Override public String getValue(DTO myDto) { return myDto.getName(); } }, "Name"); // pager: SimplePager<DTO> pager = new SimplePager<DTO>(table); table.setPager(pager); adapter = new MyAsyncAdapter(); adapter.addView(table); // does not make any difference: // adapter.updateDataSize(0, false); // adapter.updateDataSize(10, true); VerticalPanel vPanel = new VerticalPanel(); vPanel.add(table); vPanel.add(pager); RootLayoutPanel.get().add(vPanel); } // success-handler of my fetching AsyncCallback @Override public void onSuccess(List<DTO> data) { // AsyncCallback<List<DTO>> has start field adapter.updateViewData(start, data.size(), data); if(data.size() < length) adapter.updateDataSize(start + data.size(), true); } Regards J. Záruba

    Read the article

  • Is there a scheduling algorithm that optimizes for "maker's schedules"?

    - by John Feminella
    You may be familiar with Paul Graham's essay, "Maker's Schedule, Manager's Schedule". The crux of the essay is that for creative and technical professionals, meetings are anathema to productivity, because they tend to lead to "schedule fragmentation", breaking up free time into chunks that are too small to acquire the focus needed to solve difficult problems. In my firm we've seen significant benefits from minimizing the amount of disruption caused, but the brute-force algorithm we use to decide schedules is not sophisticated enough to handle scheduling large groups of people well. (*)

    What I'm looking for is whether there are any well-known algorithms that minimize this productivity disruption among a group of N makers and managers. In our model:

    - There are N people. Each person pi is either a maker (Mk) or a manager (Mg).
    - Each person has a schedule si. Everyone's schedule is H hours long.
    - A schedule consists of a series of non-overlapping intervals si = [h1, ..., hj]. An interval is either free or busy. Two adjacent free intervals are equivalent to a single free interval that spans both.
    - A maker's productivity is maximized when the number of free intervals is minimized.
    - A manager's productivity is maximized when the total length of free intervals is maximized.

    Notice that if there are no meetings, both the makers and the managers experience optimum productivity. If meetings must be scheduled, then makers prefer that meetings happen back-to-back, while managers don't care where a meeting goes. Note that because all disruptions are treated as equally harmful to makers, there's no difference between a meeting that lasts 1 second and a meeting that lasts 3 hours if it segments the available free time.

    The problem is to decide how to schedule M different meetings involving arbitrary numbers of the N people, where each person in a given meeting must place a busy interval into their schedule such that it doesn't overlap with any other busy interval. For each meeting Mt the start time of the busy interval must be the same for all parties. Does an algorithm exist to solve this problem, or one similar to it? My first thought was that this looks really similar to defragmentation (minimize the number of distinct chunks), and there are a lot of algorithms for that; but defragmentation doesn't have much to do with scheduling. Thoughts?

    (*) Practically speaking this is not really a problem, because it's rare that we have meetings with more than ~5 people at once, so the space of possibilities is small.
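
    The back-to-back preference for makers can be sketched in isolation. The Java below is not a solution to the full multi-person problem described above; it is a rough single-person placement rule, under the assumption that a maker's busy intervals are kept sorted and non-overlapping, which prefers a start time that touches an existing busy block so that no new free-time fragment is created:

        import java.util.List;

        final class Interval {
            final int start, end;                 // whole hours, half-open [start, end)
            Interval(int start, int end) { this.start = start; this.end = end; }
        }

        final class MakerPlacement {

            // Returns a start hour for a meeting of the given length, preferring a slot
            // that begins exactly where an existing busy interval ends; returns -1 if
            // the meeting does not fit at all. 'busy' must be sorted and non-overlapping.
            static int place(List<Interval> busy, int length, int dayEnd) {
                for (Interval b : busy) {
                    int start = b.end;                            // back-to-back candidate
                    if (start + length <= dayEnd && isFree(busy, start, start + length)) {
                        return start;
                    }
                }
                for (int start = 0; start + length <= dayEnd; start++) {
                    if (isFree(busy, start, start + length)) {    // fragmenting fallback
                        return start;
                    }
                }
                return -1;
            }

            private static boolean isFree(List<Interval> busy, int start, int end) {
                for (Interval b : busy) {
                    if (start < b.end && b.start < end) {
                        return false;                             // overlaps a busy block
                    }
                }
                return true;
            }
        }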

    Read the article

  • What does these FindBug messages show?

    - by Hans Klock
    Not every description from http://findbugs.sourceforge.net/bugDescriptions.html is clear to me. Sure, I can study the implementation, but if somebody is more experienced than me, some explanations and examples would be great.

    - Do you have an example for UI_INHERITANCE_UNSAFE_GETRESOURCE showing when this becomes a problem?
    - In BX_UNBOXED_AND_COERCED_FOR_TERNARY_OPERATOR I don't see the problem either. If one type is "bigger" than the other, for example int and float, then the result is float. If it's Integer and Float, it's the wrapper Float too. That's what I expect.
    - Does GC_UNRELATED_TYPES really help to find errors? Isn't it the job of the compiler to check whether, taking the given example, a Foo can't go into a Collection<String>?
    - Does HE_SIGNATURE_DECLARES_HASHING_OF_UNHASHABLE_CLASS mean something like bla(Foo f){hashtable.put(f);}, where Foo is not hashable? Does FindBugs "see" the subclasses too?
    - Is NP_GUARANTEED_DEREF_ON_EXCEPTION_PATH a stronger "wrong" than NP_ALWAYS_NULL_EXCEPTION? Why two error cases, and with NP_NULL_ON_SOME_PATH_EXCEPTION even one more? They sound very similar to me.
    - What is an example of SIO_SUPERFLUOUS_INSTANCEOF? Something like foo(String s){if (s instanceof String) ...? That does a null check too, but that is not what is being tested here... (see the sketch below)
    - NN_NAKED_NOTIFY: in my opinion the description is not clear. A change of the state is not necessary; if I use new Object() to wait and notify on, I don't change the object's state. Or is "state" the lock state? I don't get it.
    - SP_SPIN_ON_FIELD: can it really happen that a compiler will move the read outside the loop? This doesn't make sense to me, because another thread can always change the value from outside, and if the variable is volatile the JVM can't cache the value. So what does it mean?
    - What is the difference between STCAL_STATIC_CALENDAR_INSTANCE and STCAL_INVOKE_ON_STATIC_CALENDAR_INSTANCE, or STCAL_INVOKE_ON_STATIC_DATE_FORMAT_INSTANCE and STCAL_STATIC_SIMPLE_DATE_FORMAT_INSTANCE?
    - Why is XXXX.class in WL_USING_GETCLASS_RATHER_THAN_CLASS_LITERAL better than getClass()? A getClass() in a superclass called from the subclass will always return the Class object of the subclass, which is good, I think.
    - What exactly does EQ_UNUSUAL do? It should check that the argument is of the same type as the class itself, but it doesn't?
    - Did you ever have problems with breaks? Is there real value in SF_SWITCH_FALLTHROUGH? It sounds too strong for me.
    - No idea what TQ_EXPLICIT_UNKNOWN_SOURCE_VALUE_REACHES_ALWAYS_SINK and TQ_EXPLICIT_UNKNOWN_SOURCE_VALUE_REACHES_NEVER_SINK could be.
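
    As a rough illustration of the SIO_SUPERFLUOUS_INSTANCEOF question above, here is a minimal Java sketch of the kind of code I understand the detector to flag; the class and method names are made up for the example, and the comments state the assumption that the check can only ever act as an implicit null test:

        public class SuperfluousInstanceOfExample {

            // The parameter is already statically typed as String, so the instanceof
            // test can never be false for a non-null argument; it only hides a null check.
            static void printLength(String s) {
                if (s instanceof String) {   // assumed to be what SIO_SUPERFLUOUS_INSTANCEOF reports
                    System.out.println(s.length());
                }
            }

            public static void main(String[] args) {
                printLength("hello");   // prints 5
                printLength(null);      // instanceof is false for null, so nothing is printed
            }
        }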

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?

    Read the article

  • Gamepad Control for Processing + Android to Control Arduino Robot

    - by Iker
    I would like to create a Multitouch Gamepad control for Processing and use it to control a remote Arduino Robot. I would like to make the GUI on Processing and compile it for Android. Here is the GUI Gamepad for Processing I have created so far: float easing = 0.09; // start position int posX = 50; int posY = 200; // target position int targetX = 50; int targetY = 200; boolean dragging = false; void setup() { size(500,250); smooth(); } void draw() { background(255); if (!dragging) { // calculate the difference in position, apply easing and add to vx/vy float vx = (targetX - (posX)) * easing; float vy = (targetY - (posY)) * easing; // Add the velocity to the current position: make it move! posX += vx; posY += vy; } if(mousePressed) { dragging = true; posX = mouseX; posY = mouseY; } else { dragging = false; } DrawGamepad(); DrawButtons(); } void DrawGamepad() { //fill(0,155,155); //rect(0, 150, 100, 100, 15); ellipseMode(RADIUS); // Set ellipseMode to RADIUS fill(0,155,155); // Set fill to blue ellipse(50, 200, 50, 50); // Draw white ellipse using RADIUS mode ellipseMode(CENTER); // Set ellipseMode to CENTER fill(255); // Set fill to white// ellipse(posX, posY, 35, 35); // Draw gray ellipse using CENTER mode } void DrawButtons() { fill(0,155,155); // Set fill to blue ellipse(425, 225, 35, 35); ellipse(475, 225, 35, 35); fill(255,0,0); // Set fill to blue ellipse(425, 175, 35, 35); ellipse(475, 175, 35, 35); } I have realized that probably that code will not support Multitouch events on Android so I came up with another code found on this link Can Processing handle multi-touch? So the aim of this project is to create de multitouch gamepad to use to control my Arduino Robot. The gamepad should detect which key was pressed as well as the direction of the Joystick. Any help appreciated.
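
    A small follow-up sketch of the kind of direction read-out asked for above, written in Processing syntax to match the code in the question; padCenterX, padCenterY and padRadius are assumed names that mirror the hard-coded centre (50, 200) and radius 50 used in DrawGamepad(), and the two functions are only one possible approach:

        float padCenterX = 50;
        float padCenterY = 200;
        float padRadius  = 50;

        // Angle of the knob relative to the pad centre, in radians:
        // 0 points right, PI/2 points down (screen coordinates), -PI/2 points up.
        float joystickAngle() {
          return atan2(posY - padCenterY, posX - padCenterX);
        }

        // Deflection of the knob: 0.0 at the centre, 1.0 at the edge of the pad.
        float joystickMagnitude() {
          return constrain(dist(posX, posY, padCenterX, padCenterY) / padRadius, 0, 1);
        }

        // Example use inside draw(): show the values, or send them to the robot.
        // text(nf(joystickAngle(), 1, 2) + " / " + nf(joystickMagnitude(), 1, 2), 10, 20);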

    Read the article

  • How to access a named element of a derived user control in silverlight ?

    - by Mrt
    Hello, I have a custom base user control in silverlight. <UserControl x:Class="Problemo.MyBaseControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"> <Grid x:Name="LayoutRoot" Background="White"> <Border Name="HeaderControl" Background="Red" /> </Grid> </UserControl> With the following code behind public partial class MyBaseControl : UserControl { public UIElement Header { get; set; } public MyBaseControl() { InitializeComponent(); Loaded += MyBaseControl_Loaded; } void MyBaseControl_Loaded(object sender, RoutedEventArgs e) { HeaderControl.Child = Header; } } I have a derived control. <me:MyBaseControl x:Class="Problemo.MyControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" xmlns:me="clr-namespace:Problemo" d:DesignHeight="300" d:DesignWidth="400"> <me:MyBaseControl.Header> <TextBlock Name="header" Text="{Binding Text}" /> </me:MyBaseControl.Header> </me:MyBaseControl> With the following code behind. public partial class MyControl : MyBaseControl { public string Text { get; set; } public MyControl(string text) { InitializeComponent(); Text = text; } } I'm trying to set the text value of the header textblock in the derived control. It would be nice to be able to set both ways, i.e. with databinding or in the derived control code behind, but neither work. With the data binding, it doesn't work. If I try in the code behind I get a null reference to 'header'. This is silverlight 4 (not sure if that makes a difference) Any suggestions on how to do with with both databinding and in code ? Cheers

    Read the article

  • .Net Entity Framework SaveChanges is adding without add method

    - by tmfkmoney
    I'm new to the entity framework and I'm really confused about how savechanges works. There's probably a lot of code in my example which could be improved, but here's the problem I'm having. The user enters a bunch of picks. I make sure the user hasn't already entered those picks. Then I add the picks to the database. var db = new myModel() var predictionArray = ticker.Substring(1).Split(','); // Get rid of the initial comma. var user = Membership.GetUser(); var userId = Convert.ToInt32(user.ProviderUserKey); // Get the member with all his predictions for today. var memberQuery = (from member in db.Members where member.user_id == userId select new { member, predictions = from p in member.Predictions where p.start_date == null select p }).First(); // Load all the company ids. foreach (var prediction in memberQuery.predictions) { prediction.CompanyReference.Load(); } var picks = from prediction in predictionArray let data = prediction.Split(':') let companyTicker = data[0] where !(from i in memberQuery.predictions select i.Company.ticker).Contains(companyTicker) select new Prediction { Member = memberQuery.member, Company = db.Companies.Where(c => c.ticker == companyTicker).First(), is_up = data[1] == "up", // This turns up and down into true and false. }; // Save the records to the database. // HERE'S THE PART I DON'T UNDERSTAND. // This saves the records, even though I don't have db.AddToPredictions(pick) foreach (var pick in picks) { db.SaveChanges(); } // This does not save records when the db.SaveChanges outside of a loop of picks. db.SaveChanges(); foreach (var pick in picks) { } // This saves records, but it will insert all the picks exactly once no matter how many picks you have. //The fact you're skipping a pick makes no difference in what gets inserted. var counter = 1; foreach (var pick in picks) { if (counter == 2) { db.SaveChanges(); } counter++; } There's obviously something going on with the context I don't understand. I'm guessing I've somehow loaded my new picks as pending changes, but even if that's true I don't understand I have to loop over them to save changes. Can someone explain this to me?

    Read the article

  • CellTable + AsyncListViewAdapter<T>, stuck at 'loading' when paging

    - by Jaroslav Záruba
    Hi I'm trying to make CellTable, AsyncListViewAdapter<T> and SimplePager<T> working together. I managed to display my data, but whenever I click either "go to start" or "go to end" button I get that 'loading' indicator which stays there till the end of days. ( AsyncListViewAdapter<T>.onRangeChanged doesn't even get called this time.) As I have only single row in my data those two buttons should be (and appear to be) disabled. I went through the code-snippets in this thread, but I can't see nothing suspicions in what I've been doing. Is there some obvious answer to a rookie mistake? I shrinked my variable names, hopefully it won't wrap too much. protected class MyAsyncAdapter extends AsyncListViewAdapter<DTO> { @Override protected void onRangeChanged(ListView<DTO> v) { // doesn't get called on go2start/go2end :( Range r = v.getRange(); fetchData(r.getStart(), r.getLength()); } } private void addTable() { // table: CellTable<DTO> table = new CellTable<DTO>(10); table.addColumn(new Column<DTO, String>(new TextCell()) { @Override public String getValue(DTO namespace) { return namespace.getName(); } }, "Name"); // pager: SimplePager<DTO> pager = new SimplePager<DTO>(table); table.setPager(pager); adapter = new MyAsyncAdapter(); adapter.addView(table); // does not make any difference: // adapter.updateDataSize(0, false); // adapter.updateDataSize(10, true); VerticalPanel vPanel = new VerticalPanel(); vPanel.add(table); vPanel.add(pager); RootLayoutPanel.get().add(vPanel); } // success-handler of my fetching AsyncCallback @Override public void onSuccess(List<DTO> data) { // AsyncCallback<List<DTO>> has start field adapter.updateViewData(start, data.size(), data); if(data.size() < length) adapter.updateDataSize(start + data.size(), true); } Regards J. Záruba

    Read the article

  • Reducer getting fewer records than expected

    - by sathishs
    We have a scenario of generating unique key for every single row in a file. we have a timestamp column but the are multiple rows available for a same timestamp in few scenarios. We decided unique values to be timestamp appended with their respective count as mentioned in the below program. Mapper will just emit the timestamp as key and the entire row as its value, and in reducer the key is generated. Problem is Map outputs about 236 rows, of which only 230 records are fed as an input for reducer which outputs the same 230 records. public class UniqueKeyGenerator extends Configured implements Tool { private static final String SEPERATOR = "\t"; private static final int TIME_INDEX = 10; private static final String COUNT_FORMAT_DIGITS = "%010d"; public static class Map extends Mapper<LongWritable, Text, Text, Text> { @Override protected void map(LongWritable key, Text row, Context context) throws IOException, InterruptedException { String input = row.toString(); String[] vals = input.split(SEPERATOR); if (vals != null && vals.length >= TIME_INDEX) { context.write(new Text(vals[TIME_INDEX - 1]), row); } } } public static class Reduce extends Reducer<Text, Text, NullWritable, Text> { @Override protected void reduce(Text eventTimeKey, Iterable<Text> timeGroupedRows, Context context) throws IOException, InterruptedException { int cnt = 1; final String eventTime = eventTimeKey.toString(); for (Text val : timeGroupedRows) { final String res = SEPERATOR.concat(getDate( Long.valueOf(eventTime)).concat( String.format(COUNT_FORMAT_DIGITS, cnt))); val.append(res.getBytes(), 0, res.length()); cnt++; context.write(NullWritable.get(), val); } } } public static String getDate(long time) { SimpleDateFormat utcSdf = new SimpleDateFormat("yyyyMMddhhmmss"); utcSdf.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles")); return utcSdf.format(new Date(time)); } public int run(String[] args) throws Exception { conf(args); return 0; } public static void main(String[] args) throws Exception { conf(args); } private static void conf(String[] args) throws IOException, InterruptedException, ClassNotFoundException { Configuration conf = new Configuration(); Job job = new Job(conf, "uniquekeygen"); job.setJarByClass(UniqueKeyGenerator.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(Text.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setInputFormatClass(TextInputFormat.class); job.setOutputFormatClass(TextOutputFormat.class); // job.setNumReduceTasks(400); FileInputFormat.addInputPath(job, new Path(args[0])); FileOutputFormat.setOutputPath(job, new Path(args[1])); job.waitForCompletion(true); } } It is consistent for higher no of lines and the difference is as huge as 208969 records for an input of 20855982 lines. what might be the reason for reduced inputs to reducer?
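
    One way to narrow down where the rows disappear, sketched below under the assumption that the job otherwise stays as posted: tag every record the map emits and every record the reduce receives with custom counters, then compare those totals (and the framework's own "Map output records" / "Reduce input records" counters) after the run. The group and counter names are invented for the example; the split on "\t" and the key taken from column 10 mirror the Map above.

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;

        public class CountedUniqueKeyGenerator {

            public static class Map extends Mapper<LongWritable, Text, Text, Text> {
                @Override
                protected void map(LongWritable key, Text row, Context context)
                        throws IOException, InterruptedException {
                    String[] vals = row.toString().split("\t");
                    if (vals.length >= 10) {
                        // counts what is written, to be checked against the reducer side
                        context.getCounter("uniquekeygen", "map_emitted").increment(1);
                        context.write(new Text(vals[9]), row);
                    } else {
                        context.getCounter("uniquekeygen", "map_skipped").increment(1);
                    }
                }
            }

            public static class Reduce extends Reducer<Text, Text, NullWritable, Text> {
                @Override
                protected void reduce(Text eventTimeKey, Iterable<Text> timeGroupedRows, Context context)
                        throws IOException, InterruptedException {
                    for (Text row : timeGroupedRows) {
                        context.getCounter("uniquekeygen", "reduce_received").increment(1);
                        context.write(NullWritable.get(), row);
                    }
                }
            }
        }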

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and experimenting I think I have found something. For example, I do a simple string assignment:

        sTest := 'SET LOCK_TIMEOUT ';

    However, the result sometimes becomes:

        sTest = 'SET LOCK'#0'TIMEOUT '

    So the underscore gets replaced by a 0 byte. I have seen this happen once (reproducing it is tricky, it depends on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copies (for moves of 9 to 32 bytes):

        ...
        @@SmallMove: {9..32 Byte Move}
          fild qword ptr [eax+ecx] {Load Last 8}
          fild qword ptr [eax] {Load First 8}
          cmp ecx, 8
          jle @@Small16
          fild qword ptr [eax+8] {Load Second 8}
          cmp ecx, 16
          jle @@Small24
          fild qword ptr [eax+16] {Load Third 8}
          fistp qword ptr [edx+16] {Save Third 8}
        ...

    Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it go wrong... once... but could not reproduce it.

    This morning I read something about the 8087CW control word, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU for less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more). Okay, so now I know why, but not when! I changed my AsmProfiler to do a simple check (so all functions are checked on enter and leave):

        if Get8087CW = $27F then //normally $1372?
          if MainThreadID = GetCurrentThreadId then //only check mainthread
            DebugBreak;

    I "profiled" some units and DLLs and bingo (see the stack):

        Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376)
        pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345)))
        pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345)))
        Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0)
        ExtCtrls.TImage.Paint
        Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0))

    So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in the PNG support (included in D2007)? Or is the System.Move function not fail-safe?

    Read the article

  • Hierarchy inheritance

    - by reito
    I have run into a problem. In my C++ class hierarchy I have two branches for entities of different nature but with the same behaviour, i.e. the same interface. I created hierarchy trees like the first one in the image below. Now I want to work with the Item or Base classes independently of their nature (first or second branch), so I created one abstract branch for that purpose, which in my mind should look like the second tree in the image below. But it does not work. The scheme that does work looks like the third tree in the image below, and that feels like bad design to me... Does anybody have ideas about this kind of hierarchy inheritance? How can I make it more logical and easier to understand?

    [image]

    Sorry for my English; the Russian part of the internet didn't help :)

    Update: You asked me to be more explicit, so I will be. In my project (plugins for Adobe FrameMaker) I need to work with dialogs and GUI controls. In some places I work with WinAPI controls and in other places with FDK (FrameMaker's internal) controls, but I want to work through the same interface. I can't use a single base class and inherit the others from it, because the controls I need form a hierarchy tree, not a single class. So I have one hierarchy tree for WinAPI controls, one for FDK controls, and one abstract tree usable with either kind of control. For example, there is an Edit control (with WinEdit and FdkEdit implementations), a Button control (WinButton and FdkButton), and a base entity, Control (WinControl and FdkControl). I can link the classes within the implementation trees (Win and Fdk) by inheritance (WinControl is the base class for WinButton and WinEdit; FdkControl is the base class for FdkButton and FdkEdit), and I can link them to the abstract classes (Control is the base class for WinControl and FdkControl; Edit is the base class for WinEdit and FdkEdit; Button is the base class for WinButton and FdkButton). But I can't link up the abstract tree itself: the compiler complains. In effect I have two hierarchy trees that I want to inherit from a third one.

    Update: I have solved it! :) I used virtual inheritance and ended up with this scheme: http://img12.imageshack.us/img12/7782/99614779.png. The abstract tree has only pure abstract methods, all inheritance within the abstract tree is virtual, and the links from an implementation tree to the abstract tree are virtual as well. The image shows only one implementation tree for simplicity. Thanks for the help!

    Read the article

  • How Do I Prevent Rails From Treating Edit Fields_For Differently From New Fields_For

    - by James
    I am using rails3 beta3 and couchdb via couchrest. I am not using active record. I want to add multiple "Sections" to a "Guide" and add and remove sections dynamically via a little javascript. I have looked at all the screencasts by Ryan Bates and they have helped immensely. The only difference is that I want to save all the sections as an array of sections instead of individual sections. Basically like this: "sections" => [{"title" => "Foo1", "content" => "Bar1"}, {"title" => "Foo2", "content" => "Bar2"}] So, basically I need the params hash to look like that when the form is submitted. When I create my form I am doing the following: <%= form_for @guide, :url => { :action => "create" } do |f| %> <%= render :partial => 'section', :collection => @guide.sections %> <%= f.submit "Save" %> <% end %> And my section partial looks like this: <%= fields_for "sections[]", section do |guide_section_form| %> <%= guide_section_form.text_field :section_title %> <%= guide_section_form.text_area :content, :rows => 3 %> <% end %> Ok, so when I create the guide with sections, it is working perfectly as I would like. The params hash is giving me a sections array just like I would want. The problem comes when I want edit guide/sections and save them again because rails is inserting the id of the guide in the id and name of each form field, which is screwing up the params hash on form submission. Just to be clear, here is the raw form output for a new resource: <input type="text" size="30" name="sections[][section_title]" id="sections__section_title"> <textarea rows="3" name="sections[][content]" id="sections__content" cols="40"></textarea> And here is what it looks like when editing an existing resource: <input type="text" value="Foo1" size="30" name="sections[cd2f2759895b5ae6cb7946def0b321f1][section_title]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_section_title"> <textarea rows="3" name="sections[cd2f2759895b5ae6cb7946def0b321f1][content]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_content" cols="40">Bar1</textarea> How do I force rails to always use the new resource behavior and not automatically add the id to the name and value. Do I have to create a custom form builder? Is there some other trick I can do to prevent rails from putting the id of the guide in there? I have tried a bunch of stuff and nothing is working. Thanks in advance!

    Read the article

  • Dealing with personal failure

    - by codeelegance
    A while ago I was given the task of updating and extending the functionality of a software project. I was given a year to make the needed changes working solo. A month into development I came to the conclusion that it would take longer to change the existing product than to rewrite it from the ground up. I'd never attempted a complete rewrite so I talked with my boss about it and he was thrilled with the idea. I'm a fan of agile development but had never had the opportunity to take advantage of all of the prescribed practices so when I set to work I tried to incorporate as many as I could. I didn't have direct access to the customer and my coworkers (non-programmers) knew the business domain but were already so busy they didn't really have time to participate in design meetings so I resigned to working in the dark and occasionally calling one of them over to my desk to get feedback on my progress. I used TDD and refactored mercilessly and even tried taking a domain driven design approach. Things went well for a while. As the deadline came closer and the complexity of the project grew my productivity start slipping. I found myself cutting corners and ignoring the practices I had established as the pressure increased to meet the deadline. I also started working late nights and weekends to keep up with the load. In the end it made little difference how hard I worked. The project missed its deadline and what was completed wasn't enough to give to the customer. I had failed. Not only had I not finished on time but the previous version had sat untouched for almost a year so it wouldn't be of any help. Luckily we had another product that offered some of the same functionality. My boss decided to cancel the project entirely and moved all our orphaned customers to the other product. I spent weeks (along with everyone else at the company) manning the phones providing technical support for those customers. After it was all over, my boss was gracious enough not to fire me for nearly ruining the company. I was moved to the other product and have been trying to redeem myself ever since. Where did I go wrong? Has anyone else had to deal with this kind of defeat? How did you recover?

    Read the article

  • Serialization of non-required fields in protobuf-net

    - by David Hedlund
    I have a working java client that is communicating with Google, through ProtoBuf serialized messages. I am currently trying to translate that client into C#. I have a .proto file where the parameter appId is an optional string. Its default value in the C# representation as generated by the protobuf-net library is an empty string, just as it is in the java representation of the same file. message AppsRequest { optional AppType appType = 1; optional string query = 2; optional string categoryId = 3; optional string appId = 4; optional bool withExtendedInfo = 6; } I find that when I explicitly set appId to "" in the java client, the client stops working (403 Bad Request from Google). When I explicitly set appId to null in the java client, everything works, but only because hasAppId is being set to false (I'm uncertain as to how that affects the serialization). In the C# client, I always get 403 responses. I don't see any logic behind the distinction between not setting a value, and setting the default value, that seems to make all the difference in the java client. Since the output is always a binary stream, I am not sure if the successful java messages are being serialized with an empty string, or not serialized at all. In the C# client, I've tried setting IsRequired to true on the ProtoMember attribute, to force them to serialize, and I've tried setting the default value to null, and explicitly set "", so I'm quite sure I've tried some configuration where the value is being serialized. I've also played around with ProtoBuf.ProtoIgnore and at some point, removing the appId parameter altogether, but I haven't been able to avoid the 403 errors in C#. I've tried manually copying the serialized string from java, and that resolved my issues, so I'm certain that the rest of the HTTP Request is working, and the error can be traced to the serialized object. My serialization is simply this: var clone = ProtoBuf.Serializer.DeepClone(request); MemoryStream ms = new MemoryStream(2000); ProtoBuf.Serializer.Serialize(ms, clone); var bytearr = ms.ToArray(); string encodedData = Convert.ToBase64String(bytearr); I'll admit to not being quite sure about what DeepClone does. I've tried both with and without it...

    Read the article

  • ASP.NET Application Level vs. Session Level and Global.asax...confused

    - by contactmatt
    The following text is from the book I'm reading, 'MCTS Self-Paced Training Kit (Exam 70-515) Web Applications Development with ASP.NET 4". It gives the rundown of the Application Life Cycle. A user first makes a request for a page in your site. The request is routed to the processing pipeline, which forwards it to the ASP.NET runtime. The ASP.NET runtime creates an instance of the ApplicationManager class; this class instance represents the .NET framework domain that will be used to execute requests for your application. An application domain isolates global variables from other applications and allows each application to load and unload separately, as required. After the application domain has been created, an instance of the HostingEnvironment class is created. This class provides access to items inside the hosting environment, such as directory folders. ASP.NET creates instances of the core objects that will be used to process the request. This includes HttpContext, HttpRequest, and HttpResponse objects. ASP.NET creates an instance of the HttpApplication class (or an instance is reused). This class is also the base class for a site’s Global.asax file. You can use this class to trap events that happen when your application starts or stops. When ASP.NET creates an instance of HttpApplication, it also creates the modules configured for the application, such as the SessionStateModule. Finally, ASP.NET processes request through the HttpApplication pipleline. This pipeline also includes a set of events for validating requests, mapping URLs, accessing the cache, and more. The book then demonstrated an example of using the Global.asax file: <script runat="server"> void Application_Start(object sender, EventArgs e) { Application["UsersOnline"] = 0; } void Session_Start(object sender, EventArgs e) { Application.Lock(); Application["UsersOnline"] = (int)Application["UsersOnline"] + 1; Application.UnLock(); } void Session_End(object sender, EventArgs e) { Application.Lock(); Application["UsersOnline"] = (int)Application["UsersOnline"] - 1; Application.UnLock(); } </script> When does an application start? Whats the difference between session and application level? I'm rather confused on how this is managed. I thought that Application level classes "sat on top of" an AppDomain object, and the AppDomain contained information specific to that Session for that user. Could someone please explain how IIS manages Applicaiton level classes, and how an HttpApplication class sits under an AppDomain? Anything is appreciated.

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% percent CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU intensive tasks, I could image better ways to spend that 25% Here are some code excerpts: (sorry for the printf's ;) ) /* bind */ void UDPInterface::bindToPort(unsigned short port) { struct sockaddr_in target; WSADATA wsaData; target.sin_family = AF_INET; target.sin_port = htons(port); target.sin_addr.s_addr = 0; if ( WSAStartup ( 0x0202, &wsaData ) ) { printf("WSAStartup failed!\n"); exit(0); // :) WSACleanup(); } sock = socket( AF_INET, SOCK_DGRAM, 0 ); if (sock == INVALID_SOCKET) { printf("invalid socket!\n"); exit(0); } if (bind(sock,(struct sockaddr*) &target, sizeof(struct sockaddr_in) ) == SOCKET_ERROR) { printf("failed to bind to port!\n"); exit(0); } printf("[UDPInterface::bindToPort] listening on port %i\n", port); } /* read */ bool UDPInterface::UDPEvent(Glib::IOCondition io_condition) { recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL); /* process packet... */ } /* glibmm connect */ Glib::RefPtr channel = Glib::IOChannel::create_from_win32_socket(udp.sock); Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN ); I've read here in some other question, and also in glib docs (g_io_channel_win32_new_socket()) that the socket is put into nonblocking mode, and it's "a side-effect of the implementation and unavoidable". Does this explain the CPU effect, it's not clear to me? Whether or not I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also glibmm docs state that it's ok to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()) I've tried compiling the program with -pg and created a per function cpu usage report with gprof. This wasn't usefull because the time is not spent in my program, but in some external glib/glibmm dll.

    Read the article

  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles of varying lengths. The articles have HTML elements in them, and I have to insert some ads (simple <script> elements) into the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). The problem is that each ad must be inserted at about the same position in each article. The simplest solution is to split the article at a fixed number of characters (without breaking words) and insert the ad code there; this, however, runs the risk of inserting the ad in the middle of an HTML tag. I could go the regex way, but I was thinking about the following solution, using JS:

    1. Establish a character-count threshold, for example "the ad should be inserted at about 200 characters".
    2. Set accepted deviations in each direction, say -20 and +20 characters.
    3. Loop through each text node inside the article, keeping count of the total number of characters so far.
    4. Once the count exceeds the threshold, make the following decision:
       4.1. If the count exceeds the threshold by a value lower than the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node.
       4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node and make the same decision, only this time using the previous count and checking whether it is lower than the difference between the threshold and the deviation; if it is not, insert the ad between the current node and the previous one.
       4.3. If 4.1 and 4.2 both fail (which means that the previous node reached too low a character count and the current node too high a one), insert the ad after whatever character count is needed inside the current element.

    I know it's convoluted, but it's the first thing that came to mind, and it has the advantage that, by trying to insert the ad between text nodes, it will perhaps not break the flow of the article as badly as just sticking it in would (as in the final 4.3 case). Here is some pseudo-code I put together; I don't trust my English-explaining skills:

        threshold = 200
        deviation = 20
        current_count = 0
        for each node in article_nodes {
            previous_count = current_count
            current_count = current_count + node.length
            if current_count < threshold {
                continue // next iteration
            }
            if current_count > threshold + deviation {
                if previous_count < threshold - deviation {
                    // insert ad in current node
                } else {
                    // insert ad between the current and previous nodes
                }
            } else {
                // insert ad after the current node
            }
            break;
        }

    Am I over-complicating this, or am I missing a simpler, more elegant solution?

    Read the article

  • Flexslider position of previous and next slides

    - by TJ15
    I am using the basic flexslider, I wantto display some of the previous and next , so if slide 2 is showing you will see part of slide 1 to the left and part of slide 3 to the right. <!DOCTYPE html> <head> <link rel="stylesheet" href="flexslider.css" type="text/css"> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script> <style> .container {overflow: hidden; width: 100%} .flexslider {max-width: 500px; width: 500px; margin: 0 auto} .content {background: #f2f2f2; max-width: 500px; display: block; margin: 0 auto} .flex-viewport {overflow: visible !important} </style> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <body> <div class="container"> <div class="flexslider"> <ul class="slides"> <li><img src="785.jpg" /></li> <li><img src="785.jpg" /></li> <li><img src="785.jpg" /></li> <li><img src="785.jpg" /></li> </ul> </div> </div> <script src="jquery.flexslider.js"></script> <script> jQuery(document).ready(function($) { // You can use the locally-scoped $ in here as an alias to jQuery. $(window).load(function() { $('.flexslider').flexslider(); }); }); </script> </body> </html> I have reduced the image to 70% and positioned it in the middle of the page. I want to have the next and previous slides visible on either side of the main pic but no idea where to make the appropriate changes (I assume in the js file). I thought this was a margin issue but setting this to 0 in styles makes no difference. Has anyone done this and can provide some advice?

    Read the article
