Search Results

Search found 9419 results on 377 pages for 'context sensitive grammar'.


  • How to combine designable components with dependency injection

    - by Wim Coenen
    When creating a designable .NET component, you are required to provide a default constructor. From the IComponent documentation: "To be a component, a class must implement the IComponent interface and provide a basic constructor that requires no parameters or a single parameter of type IContainer." This makes it impossible to do dependency injection via constructor arguments. (Extra constructors could be provided, but the designer would ignore them.)

    Some alternatives we're considering:

    Service Locator: Don't use dependency injection; instead, use the service locator pattern to acquire dependencies. This seems to be what IComponent.Site.GetService is for. I guess we could create a reusable ISite implementation (ConfigurableServiceLocator?) which can be configured with the necessary dependencies. But how does this work in a designer context?

    Dependency injection via properties: Inject dependencies via properties. Provide default instances if they are necessary to show the component in a designer. Document which properties need to be injected.

    Inject dependencies with an Initialize method: This is much like injection via properties, but it keeps the list of dependencies that need to be injected in one place. This way the list of required dependencies is documented implicitly, and the compiler will assist you with errors when the list changes.

    Any idea what the best practice is here? How do you do it?

    edit: I have removed "(e.g. a WinForms UserControl)" since I intended the question to be about components in general. Components are all about inversion of control (see section 8.3.1 of the UMLv2 specification), so I don't think that "you shouldn't inject any services" is a good answer.

    edit 2: It took some playing with WPF and the MVVM pattern to finally "get" Mark's answer. I see now that visual controls are indeed a special case. As for using non-visual components on designer surfaces, I think the .NET component model is fundamentally incompatible with dependency injection. It appears to be designed around the service locator pattern instead. Maybe this will start to change with the infrastructure that was added in .NET 4.0 in the System.ComponentModel.Composition namespace.

    Read the article

  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2. Database file version: Access 2000.

    I have an Access program that I inherited from a previous employee. It uses forms for reports, and since I don't have much experience in Access I have continued to do this. I have created a copy of the program for another project and modified it to suit. I am having trouble getting more than one chart to print. All the charts display in form view, and they all have the same properties (excepting data, position, etc.). For some reason they are not printing. They don't even show up in the print preview. I am thinking it must be something with the graphs themselves, as they sometimes lose all information. I have to open the graphs in edit mode and change the data source from column to row and back again so that the graph gets redrawn. (Refresh doesn't fix it.) So right now I don't even have a clue as to where to look, so ideas are welcome.

    Edit #1: It seems to be a problem with linking to an unbound form. Subform Field Linker: "Can't build a link between unbound forms."

    The query for the main form is:

        SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType, tMotorTypes.fDeprecated,
               tTestType.asTest, tTest.asSerialNum, tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum,
               tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType
        FROM tMotorTypes
        INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType = tTest.ixTestType)
            ON tMotorTypes.ixMotorType = tTest.ixMotorType;

    The query for the chart is:

        SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End],
               qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In], qGraphRSTTemperatures.Core
        FROM qGraphRSTTemperatures
        ORDER BY qGraphRSTTemperatures.ixTemperature;

    Query qGraphRSTTemperatures:

        SELECT tElectricalData.dblFrequency AS Frequency, tTemperatures.dblDrvEnd AS [Drive End],
               tTemperatures.dblNonDrvEnd AS [Non Drive End], tTemperatures.dblAirIn AS [Air In],
               tTemperatures.dblCore AS Core, tSubTest.ixTest, tTemperatures.ixTemperature
        FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest)
        LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical
        WHERE (((tSubTest.ixSubTestType)=5))
        ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature;

    So how come, in form view, it shows the graph with the correct data when linked thus: Child field: ixTest, Master field: ixTest, but won't print the graph? The graph will print if I remove the links, but then I have all the data from the chart query, as it is not limited by ixTest.

    Edit #2: It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?

    Read the article

  • Image rescale and write rescaled image file in blackberry

    - by Karthick
    I am using the following code to resize and save the file in to the blackberry device. After image scale I try to write image file into device. But it gives the same data. (Height and width of the image are same).I have to make rescaled image file.Can anyone help me ??? class ResizeImage extends MainScreen implements FieldChangeListener { private String path="file:///SDCard/BlackBerry/pictures/test.jpg"; private ButtonField btn; ResizeImage() { btn=new ButtonField("Write File"); btn.setChangeListener(this); add(btn); } public void fieldChanged(Field field, int context) { if (field == btn) { try { InputStream inputStream = null; //Get File Connection FileConnection fileConnection = (FileConnection) Connector.open(path); if (fileConnection.exists()) { inputStream = fileConnection.openInputStream(); //byte data[]=inputStream.toString().getBytes(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); int j = 0; while((j=inputStream.read()) != -1) { baos.write(j); } byte data[] = baos.toByteArray(); inputStream.close(); fileConnection.close(); WriteFile("file:///SDCard/BlackBerry/pictures/org_Image.jpg",data); EncodedImage eImage = EncodedImage.createEncodedImage(data,0,data.length); int scaleFactorX = Fixed32.div(Fixed32.toFP(eImage.getWidth()), Fixed32.toFP(80)); int scaleFactorY = Fixed32.div(Fixed32.toFP(eImage.getHeight()), Fixed32.toFP(80)); eImage=eImage.scaleImage32(scaleFactorX, scaleFactorY); WriteFile("file:///SDCard/BlackBerry/pictures/resize.jpg",eImage.getData()); BitmapField bit=new BitmapField(eImage.getBitmap()); add(bit); } } catch(Exception e) { System.out.println("Exception is ==> "+e.getMessage()); } } } void WriteFile(String fileName,byte[] data) { FileConnection fconn = null; try { fconn = (FileConnection) Connector.open(fileName,Connector.READ_WRITE); } catch (IOException e) { System.out.print("Error opening file"); } if (fconn.exists()) try { fconn.delete(); } catch (IOException e) { System.out.print("Error deleting file"); } try { fconn.create(); } catch (IOException e) { System.out.print("Error creating file"); } OutputStream out = null; try { out = fconn.openOutputStream(); } catch (IOException e) { System.out.print("Error opening output stream"); } try { out.write(data); } catch (IOException e) { System.out.print("Error writing to output stream"); } try { fconn.close(); } catch (IOException e) { System.out.print("Error closing file"); } } }

    Read the article

  • With EJB 2.1, is declaring references to resources in ejb-jar.xml required?

    - by zwerd328
    I'm using Weblogic 9.2 with a lot of MDBs. These MDBs access JDBC DataSources and write to both locally and externally managed JMS Destinations using local and foreign XAConnectionFactorys, respectively. Each MDB demarcates a container-managed JTA transaction that should be distributed amongst all of these resources.

    Below is an excerpt from my ejb-jar.xml for an MDB that consumes from a local Queue called "MyDestination" and produces to an IBM Websphere MQ Queue called "MyOtherDestination". These logical names are linked to physical objects in my weblogic-ejb-jar.xml file.

    Is it required to use the <resource-ref> and <message-destination-ref> tags to expose the ConnectionFactory and Queue to the MDB? If so, is it required by Weblogic or by the J2EE spec? And for what purpose? For example, is it required to support XA transactionality? I'm already aware of the benefit of decoupling the administered objects from my MDB using names exposed to the naming context of the MDB. Is this the only value added when specifying these tags? In other words, is it acceptable to just reference these objects from my MDB using the InitialContext and the objects' fully-qualified names?

        <enterprise-bean>
          <message-driven>
            <ejb-name>MyMDB</ejb-name>
            <ejb-class>com.mycompany.MyMessageDrivenBean</ejb-class>
            <transaction-type>Container</transaction-type>
            <message-destination-type>javax.jms.Queue</message-destination-type>
            <message-destination-link>MyDestination</message-destination-link>
            <resource-ref>
              <res-ref-name>jms/myQCF</res-ref-name>
              <res-type>javax.jms.XAConnectionFactory</res-type>
              <res-auth>Container</res-auth>
            </resource-ref>
            <message-destination-ref>
              <message-destination-ref-name>jms/myOtherDestination</message-destination-ref-name>
              <message-destination-type>javax.jms.Queue</message-destination-type>
              <message-destination-usage>Produces</message-destination-usage>
              <message-destination-link>MyOtherDestination</message-destination-link>
            </message-destination-ref>
          </message-driven>
        </enterprise-bean>
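
    For illustration only (this Java sketch is not part of the original post): the two lookup styles discussed above, a java:comp/env reference declared through the descriptor versus a direct lookup by global JNDI name. The jms/myQCF name comes from the excerpt; the global queue name is a placeholder.

        import javax.jms.Queue;
        import javax.jms.XAConnectionFactory;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class JndiLookupSketch {

            // Resolves the factory through the bean's environment naming context;
            // this only works if a <resource-ref> like the one above declares jms/myQCF.
            public static XAConnectionFactory lookupViaResourceRef() throws NamingException {
                InitialContext ctx = new InitialContext();
                return (XAConnectionFactory) ctx.lookup("java:comp/env/jms/myQCF");
            }

            // The alternative raised in the question: look the administered object up
            // directly under its global JNDI name (the name below is a placeholder).
            public static Queue lookupByGlobalName() throws NamingException {
                InitialContext ctx = new InitialContext();
                return (Queue) ctx.lookup("jms/MyOtherDestinationPhysicalName");
            }
        }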

    Read the article

  • Container item implementation

    - by onurozcelik
    Hi, I am working in Train Traffic Controller software project. My responsibility in this project is to develop the visual railroad GUI. We are implementing the project with Qt. By now I am using QGraphicsLinearLayout to hold my items. I am using the layout because I do not want to calculate coordinates of each item. So far I wrote item classes to add the layout. For instance SwitchItem class symbolizes railroad switch in real world. Each item class is responsible for its own painting and events. So far so good. Now I need a composite item that can contain two or more item. This class is going to be responsible for painting the items contained in it. I need this class because I have to put two or more items inside same layout cell. If I don' t put them in same cell I can' t use layout. See the image below. BlockSegmentItem and SignalItem inside same cell. Here is my compositeitem implementation. #include "compositeitem.h" CompositeItem::CompositeItem(QString id,QList<FieldItem *> _children) { children = _children; } CompositeItem::~CompositeItem() { } QRectF CompositeItem::boundingRect() const { FieldItem *child; QRectF rect(0,0,0,0); foreach(child,children) { rect = rect.united(child->boundingRect()); } return rect; } void CompositeItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget ) { FieldItem *child; foreach(child,children) { child->paint(painter,option,widget); } } QSizeF CompositeItem::sizeHint(Qt::SizeHint which, const QSizeF &constraint) const { QSizeF itsSize(0,0); FieldItem *child; foreach(child,children) { // if its size empty set first child size to itsSize if(itsSize.isEmpty()) itsSize = child->sizeHint(Qt::PreferredSize); else { QSizeF childSize = child->sizeHint(Qt::PreferredSize); if(itsSize.width() < childSize.width()) itsSize.setWidth(childSize.width()); itsSize.setHeight(itsSize.height() + childSize.height()); } } return itsSize; } void CompositeItem::contextMenuEvent(QGraphicsSceneContextMenuEvent *event) { qDebug()<<"Test"; } This code works good with painting but when it comes to item events it is problematic. QGraphicsScene treats the composite item like a single item which is right for layout but not for events. Because each item has its own event implementation.(e.g. SignalItem has its special context menu event.) I have to handle item events seperately. Also I need a composite item implementation for the layout. How can I overcome this dilemma?

    Read the article

  • ProgressDialog does not disappear after executing dismiss, hide or cancel?

    - by Martin
    Hello, I have an Overlay extension which has 2 dialogs as private attributes: one Dialog and one ProgressDialog. After clicking on the Overlay in the MapView, the Dialog object appears. When the user clicks a button in the Dialog it disappears and the ProgressDialog is shown. Simultaneously a background task is started by notifying a running Service. When the task is done, a method (buildingLoaded) in the Overlay object is called to switch the View and to dismiss the ProgressDialog.

    The View is being switched, the code is being run (I checked with the debugger), but the ProgressDialog is not dismissed. I also tried the hide() and cancel() methods, but nothing works. Can somebody help me? Android version is 2.2. Here is the code:

        public class LODOverlay extends Overlay implements OnClickListener {

            private Dialog overlayDialog;
            private ProgressDialog progressDialog;

            ..............

            @Override
            public void onClick(View view) {
                .......
                final Context ctx = view.getContext();
                this.progressDialog = new ProgressDialog(ctx);
                ListView lv = new ListView(ctx);
                ArrayAdapter<String> adapter = new ArrayAdapter<String>(ctx, R.layout.layerlist, names);
                lv.setAdapter(adapter);
                final LODOverlay obj = this;
                lv.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        String name = ((TextView) view).getText().toString();
                        Intent getFloorIntent = new Intent(Map.RENDERER);
                        getFloorIntent.putExtra("type", "onGetBuildingLayer");
                        getFloorIntent.putExtra("id", name);
                        view.getContext().sendBroadcast(getFloorIntent);
                        overlayDialog.dismiss();
                        obj.waitingForLayer = name;
                        progressDialog.show(ctx, "Loading...", "Wait!!!");
                    }
                });
                .......
            }

            public void buildingLoaded(String id) {
                if (null != this.progressDialog) {
                    if (id.equals(this.waitingForLayer)) {
                        this.progressDialog.hide();
                        this.progressDialog.dismiss();
                        ............
                        Map.flipper.showNext(); // changes the view
                    }
                }
            }
        }
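
    One point of reference, not taken from the post: ProgressDialog.show(Context, CharSequence, CharSequence) is a static factory that returns a new dialog, so invoking it through a field does not display that field's instance. A minimal sketch (hypothetical class, not the original overlay) of keeping a handle to the dialog that is actually shown, so the later dismiss() targets the same object:

        import android.app.ProgressDialog;
        import android.content.Context;

        public class ProgressDialogHolder {

            private ProgressDialog progressDialog;

            // Keep the instance returned by the static factory; this is the
            // dialog that is actually on screen.
            public void showProgress(Context ctx) {
                progressDialog = ProgressDialog.show(ctx, "Loading...", "Wait!!!");
            }

            // Dismiss that same instance later, e.g. from a callback such as buildingLoaded().
            public void hideProgress() {
                if (progressDialog != null && progressDialog.isShowing()) {
                    progressDialog.dismiss();
                    progressDialog = null;
                }
            }
        }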

    Read the article

  • Retrieving/Updating Entity Framework POCO objects that already exist in the ObjectContext

    - by jslatts
    I have a project using Entity Framework 4.0 with POCOs (data is stored in SQL DB, lazyloading is enabled) as follows: public class ParentObject { public int ID {get; set;} public virtual List<ChildObject> children {get; set;} } public class ChildObject { public int ID {get; set;} public int ChildRoleID {get; set;} public int ParentID {get; set;} public virtual ParentObject Parent {get; set;} public virtual ChildRoleObject ChildRole {get; set;} } public class ChildRoleObject { public int ID {get; set;} public string Name {get; set;} public virtual List<ChildObject> children {get; set;} } I want to create a new ChildObject, assign it a role, then add it to an existing ParentObject. Afterwards, I want to send the new ChildObject to the caller. The code below works fine until it tries to get the object back from the database. The newChildObjectInstance only has the ChildRoleID set and does not contain a reference to the actual ChildRole object. I try and pull the new instance back out of the database in order to populate the ChildRole property. Unfortunately, in this case, instead of creating a new instance of ChildObject and assigning it to retreivedChildObject, EF finds the existing ChildObject in the context and returns the in-memory instance, with a null ChildRole property. public ChildObject CreateNewChild(int id, int roleID) { SomeObjectContext myRepository = new SomeObjectContext(); ParentObject parentObjectInstance = myRepository.GetParentObject(id); ChildObject newChildObjectInstance = new ChildObject() { ParentObject = parentObjectInstance, ParentID = parentObjectInstance.ID, ChildRoleID = roleID }; parentObjectInstance.children.Add(newChildObjectInstance); myRepository.Save(); ChildObject retreivedChildObject = myRepository.GetChildObject(newChildObjectInstance.ID); string assignedRoleName = retreivedChildObject.ChildRole.Name; //Throws exception, ChildRole is null return retreivedChildObject; } I have tried setting MergeOptions to Overwrite, calling ObjectContext.Refresh() and ObjectContext.DetectChanges() to no avail... I suspect this is related to the proxy objects that EF injects when working with POCOs. Has anyone run into this issue before? If so, what was the solution?

    Read the article

  • Scrolling RelativeLayout- white border over part of the content

    - by Tanis.7x
    I have a fairly simply Fragment that adds a handful of colored ImageViews to a RelativeLayout. There are more images than can fit on screen, so I implemented some custom scrolling. However, When I scroll around, I see that there is an approximately 90dp white border overlapping part of the content right where the edges of the screen are before I scroll. It is obvious that the ImageViews are still being created and drawn properly, but they are being covered up. How do I get rid of this? I have tried: Changing both the RelativeLayout and FrameLayout to WRAP_CONTENT, FILL_PARENT, MATCH_PARENT, and a few combinations of those. Setting the padding and margins of both layouts to 0dp. Example: Fragment: public class MyFrag extends Fragment implements OnTouchListener { int currentX; int currentY; RelativeLayout container; final int[] colors = {Color.BLACK, Color.RED, Color.BLUE}; @Override public View onCreateView(LayoutInflater inflater, ViewGroup fragContainer, Bundle savedInstanceState) { return inflater.inflate(R.layout.fragment_myfrag, null); } @Override public void onActivityCreated(Bundle savedInstanceState) { super.onActivityCreated(savedInstanceState); container = (RelativeLayout) getView().findViewById(R.id.container); container.setOnTouchListener(this); // Temp- Add a bunch of images to test scrolling for(int i=0; i<1500; i+=100) { for (int j=0; j<1500; j+=100) { int color = colors[(i+j)%3]; ImageView image = new ImageView(getActivity()); image.setScaleType(ImageView.ScaleType.CENTER); image.setBackgroundColor(color); LayoutParams lp = new RelativeLayout.LayoutParams(100, 100); lp.setMargins(i, j, 0, 0); image.setLayoutParams(lp); container.addView(image); } } } @Override public boolean onTouch(View v, MotionEvent event) { switch (event.getAction()) { case MotionEvent.ACTION_DOWN: { currentX = (int) event.getRawX(); currentY = (int) event.getRawY(); break; } case MotionEvent.ACTION_MOVE: { int x2 = (int) event.getRawX(); int y2 = (int) event.getRawY(); container.scrollBy(currentX - x2 , currentY - y2); currentX = x2; currentY = y2; break; } case MotionEvent.ACTION_UP: { break; } } return true; } } XML: <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="fill_parent" android:layout_height="fill_parent" tools:context=".FloorPlanFrag"> <RelativeLayout android:id="@+id/container" android:layout_width="fill_parent" android:layout_height="fill_parent" /> </FrameLayout>

    Read the article

  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory name prefix ("/foo"), or even prefix paths with a dot (ugly), all approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application.

    I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource not found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace. One of my ongoing frustrations with Spring generally is that the error messages are not often very helpful.

    I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single-line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 Configurer). I do all the normal Spring 3 high-level configuration:

        <mvc:annotation-driven />
        <mvc:view-controller path="/" view-name="index"/>
        <context:component-scan base-package="com.acme.web.controller" />

    ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc., with all manner of variations in my Tiles 2 configuration file. Then in my controller I've been trying to pick up my URL patterns. Problem is, I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost a default) configuration. I'm just trying to create a simple HelloWorld-type application; I wouldn't expect this is rocket science.

    Rather than posting my own configurations (which have ranged all over the map), does anyone know of an online example that shows a simple Spring 3 MVC + Tiles 2 web application that uses REST-ful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.
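
    A rough illustration (not a verified configuration from the question) of the annotation-only mapping being described: with the DispatcherServlet mapped to "/" rather than "/*" (the "/*" pattern also routes the forwarded JSP/Tiles render requests back through the dispatcher, which can produce exactly the loops and not-found errors mentioned above), a controller can serve extension-free paths directly. All class, path and view names below are made up.

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.Model;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RequestMapping;

        // Hypothetical controller: extension-free URLs mapped straight to logical
        // view names, which the configured Tiles resolver turns into layouts.
        @Controller
        public class ProductController {

            @RequestMapping("/products")
            public String list(Model model) {
                return "products";           // resolved by a Tiles definition
            }

            @RequestMapping("/products/{id}")
            public String detail(@PathVariable("id") long id, Model model) {
                model.addAttribute("productId", id);
                return "productDetail";      // resolved by a Tiles definition
            }
        }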

    Read the article

  • Question about DBD::CSV Statement-Functions

    - by sid_com
    From the SQL::Statement::Functions documentation:

    "Function syntax: When using SQL::Statement/SQL::Parser directly to parse SQL, functions (either built-in or user-defined) may occur anywhere in a SQL statement that values, column names, table names, or predicates may occur. When using the modules through a DBD or in any other context in which the SQL is both parsed and executed, functions can occur in the same places except that they can not occur in the column selection clause of a SELECT statement that contains a FROM clause."

        # valid for both parsing and executing
        SELECT MyFunc(args);
        SELECT * FROM MyFunc(args);
        SELECT * FROM x WHERE MyFuncs(args);
        SELECT * FROM x WHERE y < MyFuncs(args);

        # valid only for parsing (won't work from a DBD)
        SELECT MyFunc(args) FROM x WHERE y;

    Reading this, I would expect that the first SELECT statement of my example shouldn't work and the second should, but it is quite the contrary.

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;
        use DBI;

        open my $fh, '>', 'test.csv' or die $!;
        say $fh "id,name";
        say $fh "1,Brown";
        say $fh "2,Smith";
        say $fh "7,Smith";
        say $fh "8,Green";
        close $fh;

        my $dbh = DBI->connect( 'dbi:CSV:', undef, undef, { RaiseError => 1, f_ext => '.csv', } );

        my $table = 'test';

        say "\nSELECT 1";
        my $sth = $dbh->prepare( "SELECT MAX( id ) FROM $table WHERE name LIKE 'Smith'" );
        $sth->execute();
        $sth->dump_results();

        say "\nSELECT 2";
        $sth = $dbh->prepare( "SELECT * FROM $table WHERE id = MAX( id )" );
        $sth->execute();
        $sth->dump_results();

    This outputs:

        SELECT 1
        '7'
        1 rows

        SELECT 2
        Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2893.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894. [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.
        DBD::CSV::db prepare failed: Unknown function 'MAX' at /usr/lib/perl5/site_perl/5.10.0/SQL/Parser.pm line 2894. [for Statement "SELECT * FROM test WHERE id = MAX( id )"] at ./so_3.pl line 30.

    Could someone explain this behavior to me?

    Read the article

  • Windows splash screen using GDI+

    - by Luther
    The eventual aim of this is to have a splash screen in windows that uses transparency but that's not what I'm stuck on at the moment. In order to create a transparent window, I'm first trying to composite the splash screen and text on an off screen buffer using GDI+. At the moment I'm just trying to composite the buffer and display it in response to a 'WM_PAINT' message. This isn't working out at the moment; all I see is a black window. I imagine I've misunderstood something with regards to setting up render targets in GDI+ and then rendering them (I'm trying to render the screen using straight forward GDI blit) Anyway, here's the code so far: //my window initialisation code void MyWindow::create_hwnd(HINSTANCE instance, const SIZE &dim) { DWORD ex_style = WS_EX_LAYERED ; //eventually I'll be making use of this layerd flag m_hwnd = CreateWindowEx( ex_style, szFloatingWindowClass , L"", WS_POPUP , 0, 0, dim.cx, dim.cy, null, null, instance, null); SetWindowLongPtr(m_hwnd ,0, (__int3264)(LONG_PTR)this); m_display_dc = GetDC(NULL); //This was sanity check test code - just loading a standard HBITMAP and displaying it in WM_PAINT. It worked fine //HANDLE handle= LoadImage(NULL , L"c:\\test_image2.bmp", IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE); m_gdip_offscreen_bm = new Gdiplus::Bitmap(dim.cx, dim.cy); m_gdi_dc = Gdiplus::Graphics::FromImage(m_gdip_offscreen_bm);//new Gdiplus::Graphics(m_splash_dc );//window_dc ;m_splash_dc //this draws the conents of my splash screen - this works if I create a GDI+ context for the window, rather than for an offscreen bitmap. //For all I know, it might actually be working but when I try to display the contents on screen, it shows a black image draw_all(); //this is just to show that drawing something simple on the offscreen bit map seems to have no effect Gdiplus::Pen pen(Gdiplus::Color(255, 0, 0, 255)); m_gdi_dc->DrawLine(&pen, 0,0,100,100); DWORD last_error = GetLastError(); //returns '0' at this stage } And here's the snipit that handles the WM_PAINT message: ---8<----------------------- //Paint message snippit case WM_PAINT: { BITMAP bm; PAINTSTRUCT ps; HDC hdc = BeginPaint(vg->m_hwnd, &ps); //get the HWNDs DC HDC hdcMem = vg->m_gdi_dc->GetHDC(); //get the HDC from our offscreen GDI+ object unsigned int width = vg->m_gdip_offscreen_bm->GetWidth(); //width and height seem fine at this point unsigned int height = vg->m_gdip_offscreen_bm->GetHeight(); BitBlt(hdc, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY); //this blits a black rectangle DWORD last_error = GetLastError(); //this was '0' vg->m_gdi_dc->ReleaseHDC(hdcMem); EndPaint(vg->m_hwnd, &ps); //end paint return 1; } ---8<----------------------- My apologies for the long post. Does anybody know what I'm not quite understanding regarding how you write to an offscreen buffer using GDI+ (or GDI for that matter)and then display this on screen? Thank you for reading.

    Read the article

  • CakePHP 1.3.4: EmailComponent error - Undefined property: EmailComponent::$Controller

    - by Kevin S.
    I'm trying to develop an invitation system for my website which runs on CakePHP 1.3.4. I am trying to use the built in EmailComponent to send an email. I'm getting this error (expanded): Notice (8): Undefined property: EmailComponent::$Controller [CORE/cake/libs/controller/components/email.php, line 428] Code | Context */ function _render($content) { $viewClass = $this-Controller-view; $content = array( "", "" ) EmailComponent::_render() - CORE/cake/libs/controller/components/email.php, line 428 EmailComponent::send() - CORE/cake/libs/controller/components/email.php, line 368 UsersController::send_quick_add_email() - APP/controllers/users_controller.php, line 77 UsersController::quick_add() - APP/controllers/users_controller.php, line 104 SinglesResultsController::quick_add() - APP/controllers/singles_results_controller.php, line 63 Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 204 Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171 [main] - APP/webroot/index.php, line 83 I also get the following, which I can expand if necessary: Notice (8): Trying to get property of non-object [CORE/cake/libs/controller/components/email.php, line 428] Notice (8): Undefined property: EmailComponent::$Controller [CORE/cake/libs/controller/components/email.php, line 433] Notice (8): Trying to get property of non-object [CORE/cake/libs/controller/components/email.php, line 433] Notice (8): Undefined property: View::$webroot [CORE/cake/libs/view/view.php, line 805] Warning (2): Cannot modify header information - headers already sent by (output started at /Applications/MAMP/htdocs/cake/cake/libs/debugger.php:673) [CORE/cake/libs/controller/controller.php, line 746] I think that the EmailComponent object holds a reference to the controller it's being called from. I don't know why it's undefined in this case. Here is the code that fails (specifically, it errors on the call to $this-Email-send()): function send_quick_add_email($email) { if($email) { $this->Email->reset(); $this->Email->to = $email; $this->Email->subject = 'Some subject text'; $this->Email->from = '[email protected]'; $this->Email->template = 'email_template'; $this->set('user', $user); $this->set('token', $token); $this->Email->delivery = 'debug'; $this->Email->send(); } } Ok, for more clarification: The main data I am collecting on the site is results of a game played in meatspace. SinglesResultsController has an action, quick_add, which expects email addresses of people not already registered on the site. If the email addresses aren't associated with Users, UsersController::quick_add is called, which creates an inactive user, and sends an invitation email in UsersController::send_quick_add_email() I think the problem is related to the fact that the email isn't being sent in the first controller initialized (SinglesResultsController). Any thoughts on how to make it work? The Email component is declared at the top of both Controllers.

    Read the article

  • Problem while displaying the texture image on a view that works fine on the iPhone simulator but not on the device

    - by yunas
    hello i am trying to display an image on iphone by converting it into texture and then displaying it on the UIView. here is the code to load an image from an UIImage object - (void)loadImage:(UIImage *)image mipmap:(BOOL)mipmap texture:(uint32_t)texture { int width, height; CGImageRef cgImage; GLubyte *data; CGContextRef cgContext; CGColorSpaceRef colorSpace; GLenum err; if (image == nil) { NSLog(@"Failed to load"); return; } cgImage = [image CGImage]; width = CGImageGetWidth(cgImage); height = CGImageGetHeight(cgImage); colorSpace = CGColorSpaceCreateDeviceRGB(); // Malloc may be used instead of calloc if your cg image has dimensions equal to the dimensions of the cg bitmap context data = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte)); cgContext = CGBitmapContextCreate(data, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast); if (cgContext != NULL) { // Set the blend mode to copy. We don't care about the previous contents. CGContextSetBlendMode(cgContext, kCGBlendModeCopy); CGContextDrawImage(cgContext, CGRectMake(0.0f, 0.0f, width, height), cgImage); glGenTextures(1, &(_textures[texture])); glBindTexture(GL_TEXTURE_2D, _textures[texture]); if (mipmap) glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); else glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); if (mipmap) glGenerateMipmapOES(GL_TEXTURE_2D); err = glGetError(); if (err != GL_NO_ERROR) NSLog(@"Error uploading texture. glError: 0x%04X", err); CGContextRelease(cgContext); } free(data); CGColorSpaceRelease(colorSpace); } The problem that i currently am facing is this code workd perfectly fine and displays the image on simulator where as on the device as seen on debugger an error is displayed i.e. Error uploading texture. glError: 0x0501 any idea how to tackle this bug.... thnx in advance 4 ur soluitons

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPV4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them, the second table ('network_providers') contains a list of provider information for given netblocks. events table (~4,000,000 rows): event_id (int) event_name (varchar) ip_address (unsigned 4 byte int) network_providers table (~60,000 rows): ip_start (unsigned 4 byte int) ip_end (unsigned 4 byte int) provider_name (varchar) Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of: event_id,event_name,ip_address,provider_name If do a query along the lines of either of the following, I get the result I expect: SELECT provider_name FROM network_providers WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 SELECT provider_name FROM network_providers WHERE 3232235521 >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect: SELECT event.id, event.event_name, (SELECT provider_name FROM network_providers WHERE event.ip_address >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1) as provider FROM events Instead the a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works, just using it dynamically in the subquery in this manner that doesn't work for me. I suspect the problem is there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery. The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again. Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a between on ip_start & ip_end - that's intentional (the DB's can be out of date, and such cases the owner in the DB is almost always in the next specified range and 'best guess' is fine in this context) however I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.

    Read the article

  • Android Couchdb - libcouch and IPC Aidl Services

    - by dirtySanchez
    I am working on a native CouchdDB app with android. Now just this week CouchOne released libcouch, described as "Library files needed to interact with CouchDB on Android": couchone_libcouch@Github It is a basic app that installs CouchDB if the CouchDB service (that comes with CouchDB if it was installed previously) can't be bound to. To be more precise, as I understand it: libcouch estimates CouchDb's presence on the device by trying to bind to a IPC Service from CouchDB and through that service wants communicate with CouchDB. Please see the method "attemptLaunch()" at CouchAppLauncher.class for reviewing this: public void attemptLaunch() { Log.i(TAG,"1.) called attemptLaunch"); Intent intent = new Intent(ICouchService.class.getName()); Log.i(TAG,"1.a) setup Intent"); Boolean canStart = bindService(intent, couchServiceConn, Context.BIND_AUTO_CREATE); Log.i(TAG,"1.b bound service. canStart: " + Boolean.toString(canStart)); if (!canStart) { setContentView(R.layout.install_couchdb); TextView label = (TextView) findViewById(R.id.install_couchdb_text); Button btn = (Button) this.findViewById(R.id.install_couchdb_btn); String text = getString(R.string.app_name) + " requires Apache CouchDB to be installed."; label.setText(text); // Launching the market will fail on emulators btn.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { launchMarket(); finish(); } }); } } The question(s) I have about this are: libcouch never is able to "find" a previously installed CouchDB. It always attempts to install CouchDB from the market. This is because it never actually is able to bind to the CouchDBService. As I understand the purpose auf AIDL generated service interfaces, the actual service that intends to offer it's IPC to other applications should make use of AIDL. In this case the AIDL has been moved to the application that is trying to bind to the remote service, which is libcouch in this case. Reviewing the commits the AIDL files have just been moved out of that repository to libcouch. For complete linkage, here's the link to the Android CouchDB sources: github.com/couchone/libcouch-android Now, I could be completely wrong in my findings, it could also be lincouch's Manifest that s missing something, but I am really looking forward to get some answers!

    Read the article

  • C++ Virtual Constructor, without clone()

    - by Julien L.
    I want to perform "deep copies" of an STL container of pointers to polymorphic classes. I know about the Prototype design pattern, implemented by means of the Virtual Ctor Idiom, as explained in the C++ FAQ Lite, Item 20.8. It is simple and straightforward: struct ABC // Abstract Base Class { virtual ~ABC() {} virtual ABC * clone() = 0; }; struct D1 : public ABC { virtual D1 * clone() { return new D1( *this ); } // Covariant Return Type }; A deep copy is then: for( i = 0; i < oldVector.size(); ++i ) newVector.push_back( oldVector[i]->clone() ); Drawbacks As Andrei Alexandrescu states it: The clone() implementation must follow the same pattern in all derived classes; in spite of its repetitive structure, there is no reasonable way to automate defining the clone() member function (beyond macros, that is). Moreover, clients of ABC can possibly do something bad. (I mean, nothing prevents clients to do something bad, so, it will happen.) Better design? My question is: is there another way to make an abstract base class clonable without requiring derived classes to write clone-related code? (Helper class? Templates?) Following is my context. Hopefully, it will help understanding my question. I am designing a class hierarchy to perform operations on a class Image: struct ImgOp { virtual ~ImgOp() {} bool run( Image & ) = 0; }; Image operations are user-defined: clients of the class hierarchy will implement their own classes derived from ImgOp: struct CheckImageSize : public ImgOp { std::size_t w, h; bool run( Image &i ) { return w==i.width() && h==i.height(); } }; struct CheckImageResolution; struct RotateImage; ... Multiple operations can be performed sequentially on an image: bool do_operations( std::vector< ImgOp* > v, Image &i ) { std::for_each( v.begin(), v.end(), /* bind2nd(mem_fun(&ImgOp::run), i ...) don't remember syntax */ ); } int main( ... ) { std::vector< ImgOp* > v; v.push_back( new CheckImageSize ); v.push_back( new CheckImageResolution ); v.push_back( new RotateImage ); Image i; do_operations( v, i ); } If there are multiple images, the set can be split and shared over several threads. To ensure "thread-safety", each thread must have its own copy of all operation objects contained in v -- v becomes a prototype to be deep copied in each thread.

    Read the article

  • JSF 2.0: Preserving component state across multiple views

    - by tlind
    The web application I am developing using MyFaces 2.0.3 / PrimeFaces 2.2RC2 is divided into a content and a navigation area. In the navigation area, which is included into multiple pages using templating (i.e. <ui:define>), there are some widgets (e.g. a navigation tree, collapsible panels, etc.) whose component state I want to preserve across views.

    For example, let's say I am on the home page. When I navigate to a product details page by clicking on a product in the navigation tree, my Java code triggers a redirect using:

        navigationHandler.handleNavigation(context, null, "/detailspage.jsf?faces-redirect=true")

    Another way of getting to that details page would be by directly clicking on a product teaser that is shown on the home page. The corresponding <h:link> would lead us to the details page. In both cases, the expansion state of my navigation tree (a PrimeFaces tree component) and my collapsible panels is lost. I understand this is because the redirect / h:link results in the creation of a new view.

    What is the best way of dealing with this? I am already using MyFaces Orchestra in my project along with its conversation scope, but I am not sure if this is of any help here (since I'd have to bind the expansion/collapsed state of the widgets to a backing bean... but as far as I know, this is not possible). Is there a way of telling JSF which component states to propagate to the next view, assuming that the same component exists in that view? I could use a pointer in the right direction here. Thanks!

    Update 1: I just tried binding the panels and the tree to a session-scoped bean, but this seems to have no effect. Also, I guess I would have to bind all child components (if any) manually, so this doesn't seem like the way to go.

    Update 2: Binding UI components to non-request-scoped beans is not a good idea (see the link I posted in a comment below). If there is no easier approach, I might have to proceed as follows:

    - When a panel is collapsed or the tree is expanded, save the current state in a session-scoped backing bean (not the UI component itself).
    - The components' states are stored in a map. The map key is the component's (hopefully) unique, relative ID. I cannot use the whole absolute component path here, since the IDs of the parent naming containers might change if the view changes, assuming these IDs are generated programmatically.
    - As soon as a new view gets constructed, retrieve the components' states from the map and apply them to the components. For example, in the case of the panels, I can set the collapsed attribute to a value retrieved from my session-scoped backing bean.
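
    A minimal sketch of the plan outlined in Update 2 (not working code from the project): a session-scoped bean that stores plain state values, not the components themselves, keyed by a relative component id. All names are hypothetical.

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.SessionScoped;

        @ManagedBean
        @SessionScoped
        public class WidgetStateHolder implements Serializable {

            // Keyed by a relative component id, as described in Update 2.
            private final Map<String, Boolean> collapsedPanels = new HashMap<String, Boolean>();

            public void storeCollapsed(String panelId, boolean collapsed) {
                collapsedPanels.put(panelId, collapsed);
            }

            public boolean isCollapsed(String panelId) {
                Boolean collapsed = collapsedPanels.get(panelId);
                return collapsed != null && collapsed.booleanValue();
            }
        }

    A toggle listener on each panel would write into this map, and the panel's collapsed attribute would read from it when the next view is built.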

    Read the article

  • .Net Entity Framework SaveChanges is adding without add method

    - by tmfkmoney
    I'm new to the entity framework and I'm really confused about how savechanges works. There's probably a lot of code in my example which could be improved, but here's the problem I'm having. The user enters a bunch of picks. I make sure the user hasn't already entered those picks. Then I add the picks to the database. var db = new myModel() var predictionArray = ticker.Substring(1).Split(','); // Get rid of the initial comma. var user = Membership.GetUser(); var userId = Convert.ToInt32(user.ProviderUserKey); // Get the member with all his predictions for today. var memberQuery = (from member in db.Members where member.user_id == userId select new { member, predictions = from p in member.Predictions where p.start_date == null select p }).First(); // Load all the company ids. foreach (var prediction in memberQuery.predictions) { prediction.CompanyReference.Load(); } var picks = from prediction in predictionArray let data = prediction.Split(':') let companyTicker = data[0] where !(from i in memberQuery.predictions select i.Company.ticker).Contains(companyTicker) select new Prediction { Member = memberQuery.member, Company = db.Companies.Where(c => c.ticker == companyTicker).First(), is_up = data[1] == "up", // This turns up and down into true and false. }; // Save the records to the database. // HERE'S THE PART I DON'T UNDERSTAND. // This saves the records, even though I don't have db.AddToPredictions(pick) foreach (var pick in picks) { db.SaveChanges(); } // This does not save records when the db.SaveChanges outside of a loop of picks. db.SaveChanges(); foreach (var pick in picks) { } // This saves records, but it will insert all the picks exactly once no matter how many picks you have. //The fact you're skipping a pick makes no difference in what gets inserted. var counter = 1; foreach (var pick in picks) { if (counter == 2) { db.SaveChanges(); } counter++; } There's obviously something going on with the context I don't understand. I'm guessing I've somehow loaded my new picks as pending changes, but even if that's true I don't understand I have to loop over them to save changes. Can someone explain this to me?

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Android - Linkify Problem

    - by Ryan
    I seem to be having trouble with the Linkify I am using in my custom adapter. For some reason I receive the following stack trace when I click on one of the links:

        06-07 20:49:34.696: ERROR/AndroidRuntime(813): Uncaught handler: thread main exiting due to uncaught exception
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): android.util.AndroidRuntimeException: Calling startActivity() from outside of an Activity context requires the FLAG_ACTIVITY_NEW_TASK flag. Is this really what you want?
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.app.ApplicationContext.startActivity(ApplicationContext.java:550)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.content.ContextWrapper.startActivity(ContextWrapper.java:248)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.text.style.URLSpan.onClick(URLSpan.java:62)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.text.method.LinkMovementMethod.onTouchEvent(LinkMovementMethod.java:216)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.widget.TextView.onTouchEvent(TextView.java:6560)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.View.dispatchTouchEvent(View.java:3709)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:884)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:1659)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at com.android.internal.policy.impl.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1107)
        06-07 20:49:34.745: ERROR/AndroidRuntime(813): at android.app.Activity.dispatchTouchEvent(Activity.java:2061)

    Here is the code that is calling it:

        TextView bot = new TextView( c );
        bot.setText(li.getBottomText());
        bot.setTextColor(Color.BLACK);
        bot.setTextSize(12);
        bot.setPadding(50, 35, 0, 10);
        Linkify.addLinks(bot, Linkify.ALL);
        rL.addView(bot, ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT);

    I understand what the error is saying but I am not sure how to fix it. Does anyone have any ideas? Thanks in advance for your help!
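
    The trace shows URLSpan.onClick() calling startActivity() through the TextView's own Context, so the exception points at that Context not being an Activity. A small sketch, using a hypothetical helper rather than the original adapter, of building the linkified view with the Activity itself:

        import android.app.Activity;
        import android.text.util.Linkify;
        import android.widget.TextView;

        public final class LinkifiedTextFactory {

            private LinkifiedTextFactory() {
            }

            // Build the TextView with the Activity, not getApplicationContext(),
            // so the URLSpan can start the browser without FLAG_ACTIVITY_NEW_TASK.
            public static TextView create(Activity activity, CharSequence text) {
                TextView bot = new TextView(activity);
                bot.setText(text);
                bot.setTextSize(12);
                Linkify.addLinks(bot, Linkify.ALL);
                return bot;
            }
        }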

    Read the article

  • Using Constants in Perl

    - by David W.
    I am trying to define constants in Perl using the use constant pragma:

        use constant {
            FOO => "bar",
            BAR => "foo"
        };

    I'm running into a bit of trouble, and hoping there's a standard way of handling it.

    First of all... I am defining a hook script for Subversion. To make things simple, I want to have a single file where the class (package) I'm using is in the same file as my actual script. Most of this package will have constants involved in it:

        print "This is my program";

        package MyClass;

        use constant { FOO => "bar" };

        sub new {
            # yaddah, yaddah, yaddah

    I would like my constant FOO to be accessible to my main program. I would like to do this without having to refer to it as MyClass::FOO. Normally, when the package is a separate file, I could do this in my main program:

        use MyClass qw(FOO);

    but, since my class and program are a single file, I can't do that. What would be the best way for my main program to be able to access the constants defined in my class?

    The second issue... I would like to use the constant values as hash keys:

        $myHash{FOO} = "bar";

    The problem is that %myHash has the literal string FOO as the key and not the value of the constant. This causes problems when I do things like this:

        if (defined($myHash{FOO})) {
            print "Key " . FOO . " does exist!\n";
        }

    I could force the context:

        if (defined("" . FOO . "")) {

    I could add parentheses:

        if (defined(FOO())) {

    Or, I could use a temporary variable:

        my $foo = FOO;
        if (defined($foo)) {

    None of these are really nice ways of handling this issue. So, what is the best way? Is there one way I'm missing? By the way, I don't want to use Readonly::Scalar because it is 1) slow, and 2) not part of the standard Perl distribution. I want my hook not to require additional Perl packages and to be as simple as possible.

    Read the article

  • Solr Facet Search spring-data-solr

    - by sv1
    I am new to Solr and we are using Spring-data for Solr I have a question may be its too simple but I am unable to comprehend. Basically I need to search on any field but I have a "zip" field as one of the facet fields. I have a Solr repository . (Am not sure if the annotations on the Repository are correct.) public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } In my Service class I am trying to call this methods by Injecting MyRepository like below public class MyService { @Inject MyRepository solrRepo; @Inject SolrTemplate solrTemplate; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)); query.setFacetOptions(new FacetOptions("zip")); //Not sure how to get the Pageable object. //But the repository doesnt accept without it. return solrTemplate.queryForPage(query,{Pageable Instance to be passed here}) } From my jUnit I am loading context files needed for Solr and Spring //In jUnit the test method looks like this service.tryFacets("some value"); and it fails with - method not found in the service class at the call to the repository method. *********EDIT**************** As per ChristophStrobl's advice created a copyfield called multivaluedCopyField and the searchString argument works good. But the facet still isnt working...Now my code looks like this. I get the MyPOJO object as response but the facetcount and the faceted values are missing. public interface MyRepository extends SolrCrudRepository<MyPOJO, String> { @Query(value = "*:*") @Facet(fields={"zip"}) public FacetPage<MyPOJO> findByQueryandAnno(String searchTerm,Pageable page); } My service class looks like public class MyService { @Inject MyRepository solrRepo; public FacetPage<MyPOJO> tryFacets(String searchString){ //Here is where I am struggling SimpleFacetQuery query = new SimpleQuery(new SimpleStringCriteria(searchString)).setPageRequest new PageRequest(0,5)); query.setFacetOptions(new FacetOptions("zip")); FacetPage<MyPOJO> facetedPOJO= solrRepo.findByQueryandAnno(searchString,query.getPageRequest()); return facetedPOJO; } My jUnit method call is like service.tryFacets("some value");

    Read the article

  • Microsoft Word Document Controls not accepting carriage returns

    - by Scott
    So, I have a Microsoft Word 2007 Document with several Plain Text Format (I have tried Rich Text Format as well) controls which accept input via XML. For carriage returns, I had the string being passed through XML containing "\r\n" when I wanted a carriage return, but the word document ignored that and just kept wrapping things on the same line. I also tried replacing the \r\n with System.Environment.NewLine in my C# mapper, but that just put in \r\n anyway, which still didn't work. Note also that on the control itself I have set it to "Allow Carriage Returns (Multiple Paragrpahs)" in the control properties. This is the XML for the listMapper <Field id="32" name="32" fieldType="SimpleText"> <DataSelector path="/Data/DB/DebtProduct"> <InputField fieldType="" path="/Data/DB/Client/strClientFirm" link="" type=""/> <InputField fieldType="" path="strClientRefDebt" link="" type=""/> </DataSelector> <DataMapper formatString="{0} Account Number: {1}" name="SimpleListMapper" type=""> <MapperData> </MapperData> </DataMapper> </Field> Note that this is the listMapper C# where I actually map the list (notice where I try and append the system.environment.newline) namespace DocEngine.Core.DataMappers { public class CSimpleListMapper:CBaseDataMapper { public override void Fill(DocEngine.Core.Interfaces.Document.IControl control, CDataSelector dataSelector) { if (control != null && dataSelector != null) { ISimpleTextControl textControl = (ISimpleTextControl)control; IContent content = textControl.CreateContent(); CInputFieldCollection fileds = dataSelector.Read(Context); StringBuilder builder = new StringBuilder(); if (fileds != null) { foreach (List<string> lst in fileds) { if (CanMap(lst) == false) continue; if (builder.Length > 0 && lst[0].Length > 0) builder.Append(Environment.NewLine); if (string.IsNullOrEmpty(FormatString)) builder.Append(lst[0]); else builder.Append(string.Format(FormatString, lst.ToArray())); } content.Value = builder.ToString(); textControl.Content = content; applyRules(control, null); } } } } } Does anybody have any clue at all how I can get MS Word 2007 (docx) to quit ignoring my newline characters??

    Read the article

  • In a PHP project, how do you organize and access your helper objects?

    - by Pekka
    How do you organize and manage your helper objects like the database engine, user notification, error handling and so on in a PHP based, object oriented project? Say I have a large PHP CMS. The CMS is organized in various classes. A few examples: the database object user management an API to create/modify/delete items a messaging object to display messages to the end user a context handler that takes you to the right page a navigation bar class that shows buttons a logging object possibly, custom error handling etc. I am dealing with the eternal question, how to best make these objects accessible to each part of the system that needs it. my first apporach, many years ago was to have a $application global that contained initialized instances of these classes. global $application; $application->messageHandler->addMessage("Item successfully inserted"); I then changed over to the Singleton pattern and a factory function: $mh =&factory("messageHandler"); $mh->addMessage("Item successfully inserted"); but I'm not happy with that either. Unit tests and encapsulation become more and more important to me, and in my understanding the logic behind globals/singletons destroys the basic idea of OOP. Then there is of course the possibility of giving each object a number of pointers to the helper objects it needs, probably the very cleanest, resource-saving and testing-friendly way but I have doubts about the maintainability of this in the long run. Most PHP frameworks I have looked into use either the singleton pattern, or functions that access the initialized objects. Both fine approaches, but as I said I'm happy with neither. I would like to broaden my horizon on what is possible here and what others have done. I am looking for examples, additional ideas and pointers towards resources that discuss this from a long-term, real-world perspective. Also, I'm interested to hear about specialized, niche or plain weird approaches to the issue. Bounty I am following the popular vote in awarding the bounty, the answer which is probably also going to give me the most. Thank you for all your answers!

    Read the article

  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is a "can it be improved" question.

    Topic: storing typed objects in memory.

    Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goal includes typed objects.

    Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compactly). Every class is represented by primitives and some meta-data (containing class properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context switches. For primitives, the meta-data is very small compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary.

    Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit level, using a sort of "packing" mechanism which is read by an FSM at runtime when doing inspection of the type (like when passing the object to methods for checking, etc.).

    For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check the nearest column header for the meaning of a switch):

        Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ...

    (After reaching "done" the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based data structures.)

    For an array of 32 booleans containing all true values, the memory for this type would be (read top-down):

        1        Primitive
        1        Array
        1        ArrayType: Primitive
        0        Not-Array
        0        Not-Integer
        1        Boolean
        0        Not-Byte (thus bit)
        1        Integer Size: 1 Byte
        00100000 Array size
        01010101 01010101 01010101 01010101   Data (user defined)

    Thus, 8 bytes represent 32 booleans in an array:

        11100101 00100000 01010101 01010101 01010101 01010101

    How can I improve this? (Both performance- and memory-consumption wise.)
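
    To make the bit-level packing concrete, here is a small hypothetical helper (written in Java, not in the language being compiled) that reproduces the header from the worked example above, 11100101 followed by 00100000. It represents bits as characters purely for readability.

        // Hypothetical helper: appends single bits and whole bytes as characters,
        // so the packed header can be inspected as a string.
        public class BitPacker {

            private final StringBuilder bits = new StringBuilder();

            public BitPacker put(int bit) {
                bits.append(bit == 0 ? '0' : '1');
                return this;
            }

            public BitPacker putByte(int value) {
                for (int i = 7; i >= 0; i--) {
                    put((value >> i) & 1);
                }
                return this;
            }

            public String asBitString() {
                return bits.toString();
            }

            public static void main(String[] args) {
                String header = new BitPacker()
                        .put(1)       // primitive
                        .put(1)       // array
                        .put(1)       // element type: primitive
                        .put(0)       // element is not itself an array
                        .put(0)       // not an integer
                        .put(1)       // boolean
                        .put(0)       // bit-sized booleans, not byte-sized
                        .put(1)       // array size stored in 1 byte
                        .putByte(32)  // 00100000: 32 entries
                        .asBitString();
                System.out.println(header);  // prints 1110010100100000
            }
        }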

    Read the article
