Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.


  • Implicit theme error: The property 'Content' was not found in type 'System.Windows.Controls.Control'.

    - by Mark
    I got this error while trying to upgrade our large project to SL4. I didn't write the original theme and my theme knowledge isn't great. In my demo app I have a Label and a LabelHeader (a class I created, just derived from Label with DefaultStyleKey = typeof(LabelHeader)). I am styling them like this:

        <Style TargetType="themeControls:LabelHeader">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate>
                        <DataInput:Label FontSize="{TemplateBinding FontSize}" FontFamily="{TemplateBinding FontFamily}" Foreground="{TemplateBinding Foreground}" Content="{TemplateBinding Content}"/>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Setter Property="FontFamily" Value="Tahoma"/>
            <Setter Property="FontSize" Value="20"/>
            <Setter Property="Foreground" Value="Red"/>
        </Style>

    This works in SL3, but in SL4 I get:

        Error: Unhandled Error in Silverlight Application
        Code: 2500
        Category: ParserError
        Message: The property 'Content' was not found in type 'System.Windows.Controls.Control'.
        File: Line: 9 Position: 168

    If I change Content="{TemplateBinding Content}" to Content="XXX", then there is no error, but of course I get XXX in my label rather than the content I set in XAML on the page. Any ideas how I can get this working? Demo project here: http://walkersretreat.co.nz/files/ThemeIssue.zip (Apologies for reposting; I have so far got no answers over here: http://forums.silverlight.net/forums/p/183380/415930.aspx#415930)

  • What off-the-shelf licensing system will meet my needs?

    - by Anders Pedersen
    I'm looking for an off-the-shelf license system for desktop software. After some research on the net -- and of course here on StackOverflow -- I haven't found one that suits our needs. I have a couple of must-have features and some would-be-nice features:

    Must have:
    - Encrypted unlock key
    - Possibility to automate the unlock key generation on my website
    - User info in the key so that I can show name and company in an about box and perhaps in reports

    Nice to have:
    - License managing tools
    - Online activation
    - Nice upgrade possibilities to a version with a concurrent license model and a subscription model

    I have looked at Manco, but I find them difficult to work with and the documentation is minimal. Further, I couldn't get the name in the key. Also, the automatic generation of a key on my website has to be done with an application web service, but I would rather program against a DLL. Next I looked at Xheo. It is easier to use and the documentation is better, but the price is substantially higher, and here you can only get the user name in the license file, which you then have to provide together with the unlock key. Could anyone share their experiences on what you are using and how it is working for you?

  • Problem with copying local data onto HDFS on a Hadoop cluster using Amazon EC2/S3.

    - by Deepak Konidena
    Hi, I have set up a Hadoop cluster containing 5 nodes on Amazon EC2. Now, when I log in to the master node and submit the following command:

        bin/hadoop jar <program>.jar <arg1> <arg2> <path/to/input/file/on/S3>

    it throws the following errors (not at the same time). The first error is thrown when I don't replace the slashes with '%2F', and the second is thrown when I do replace them with '%2F':

        1) java.lang.IllegalArgumentException: Invalid hostname in URI S3://<ID>:<SECRETKEY>@<BUCKET>/<path-to-inputfile>
        2) org.apache.hadoop.fs.S3.S3Exception: org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.

    Note: 1) When I submitted jps to see what tasks were running on the master, it just showed 1116 NameNode, 1699 Jps, 1180 JobTracker, leaving out DataNode and TaskTracker. 2) My secret key contains two '/' (forward slashes), and I replace them with '%2F' in the S3 URI.

    PS: The program runs fine on EC2 when run on a single node. It's only when I launch a cluster that I run into issues related to copying data to/from S3 from/to HDFS. And what does distcp do? Do I need to distribute the data even after I copy it from S3 to HDFS? (I thought HDFS took care of that internally.) If you could direct me to a link that explains running map/reduce programs on a Hadoop cluster using Amazon EC2/S3, that would be great. Regards, Deepak.
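
    One way to sidestep the '%2F' escaping problem described above (not necessarily the poster's eventual fix, just a common pattern) is to keep the AWS credentials out of the URI entirely and supply them through the Hadoop configuration. Below is a minimal Java sketch assuming the classic Hadoop 0.20-era API; the bucket name and key values are placeholders, and the fs.s3.* properties apply to the s3:// block filesystem (the native s3n:// scheme uses the fs.s3n.* equivalents).

    ```java
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3CredentialsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Credentials live in the configuration, so a secret key containing
            // '/' never needs to be URL-encoded into the URI itself.
            conf.set("fs.s3.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");         // placeholder
            conf.set("fs.s3.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY"); // placeholder

            // The URI now names only the bucket and path, with no embedded key.
            FileSystem s3 = FileSystem.get(URI.create("s3://my-bucket/"), conf);
            Path input = new Path("s3://my-bucket/path/to/input");
            System.out.println("Input exists: " + s3.exists(input));
        }
    }
    ```

    The same two properties can also be placed in core-site.xml so every node in the cluster picks them up, which avoids putting the secret key on the command line or in the job URI at all.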

  • How to keep track of a private messaging system using MongoDB?

    - by luckytaxi
    Take Facebook's private messaging system, where you have to keep track of sender and receiver along with the message content. If I were using MySQL I would have multiple tables, but with MongoDB I'll try to avoid all that. I'm trying to come up with a "good" schema that can scale and is easy to maintain. If I were using MySQL, I would have separate tables to reference the user and the message. See below:

        profiles table
            user_id
            first_name
            last_name

        message table
            message_id
            message_body
            time_stamp

        user_message_ref table
            user_id (FK)
            message_id (FK)
            is_sender (boolean)

    With the schema listed above, I can query for any messages that "Bob" may have, regardless of whether he's the recipient or the sender. Now, how do I turn that into a schema that works with MongoDB? I'm thinking I'll have a separate collection to hold the messages. The problem is, how can I differentiate between the sender and the recipient? If Bob logs in, what do I query against? Depending on whether Bob initiated the email, I don't want to have to query against "sender" and "receiver" just to see if the message belongs to the user.
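
    One common way to answer the "what do I query against?" part is to store both ends of the conversation explicitly on each message document and use a single $or query for "all of Bob's messages". This is only a sketch, not a prescribed schema: the field names (sender_id, recipient_id, body, ts) and the legacy com.mongodb Java driver API used here are illustrative assumptions.

    ```java
    import java.util.Arrays;
    import java.util.Date;

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.Mongo;

    public class MessageQuerySketch {
        public static void main(String[] args) throws Exception {
            DB db = new Mongo("localhost").getDB("demo");          // illustrative database name
            DBCollection messages = db.getCollection("messages");  // illustrative collection name

            // Each message document stores both ends of the conversation explicitly.
            messages.insert(new BasicDBObject("sender_id", "bob")
                    .append("recipient_id", "alice")
                    .append("body", "Hello!")
                    .append("ts", new Date()));

            // "All of Bob's messages" is then a single $or query, regardless of
            // whether he was the sender or the recipient.
            BasicDBObject either = new BasicDBObject("$or", Arrays.asList(
                    new BasicDBObject("sender_id", "bob"),
                    new BasicDBObject("recipient_id", "bob")));
            DBCursor cursor = messages.find(either);
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
        }
    }
    ```

    With an index on sender_id and another on recipient_id, the $or query can use both, and listing only sent or only received messages is just a single-field query.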

  • How to configure a system-wide package in OSGi?

    - by cheng81
    I need to make a library available to some bundles. This library makes use of RMI, so it needs (as far as I know, at least) to use the system class loader in order to work (I tried to "OSGi-fy" the library, which results in ClassCastExceptions at runtime). So what I did was to remove the dependencies from the bundles that use that library and compile them with the library included in the property jars.extra.classpath (in the build.properties of the Eclipse project). Then I added

        org.osgi.framework.bootdelegation=com.blipsystems.*

    in the Felix configuration file and started the Felix container with the following command line:

        java -classpath lib/blipnetapi.jar -jar bin/felix.jar

    ...which in turn threw a NoClassDefFoundError for a class of the blipnetapi.jar library:

        ERROR: Error starting file:/home/frza/felix/load/BlipnetApiOsgiService_1.0.0.1.jar (org.osgi.framework.BundleException: Activator start error in bundle BlipnetApiOsgiService [30].)
        java.lang.NoClassDefFoundError: com/blipsystems/blipnet/api/util/BlipNetSecurityManager
            at java.lang.Class.getDeclaredConstructors0(Native Method)
            at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
            at java.lang.Class.getConstructor0(Class.java:2699)
            at java.lang.Class.newInstance0(Class.java:326)
            at java.lang.Class.newInstance(Class.java:308)
            at org.apache.felix.framework.Felix.createBundleActivator(Felix.java:3525)
            at org.apache.felix.framework.Felix.activateBundle(Felix.java:1694)
            at org.apache.felix.framework.Felix.startBundle(Felix.java:1621)
            at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1076)
            at org.apache.felix.framework.StartLevelImpl.run(StartLevelImpl.java:264)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.lang.ClassNotFoundException: com.blipsystems.blipnet.api.util.BlipNetSecurityManager
            at org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:726)
            at org.apache.felix.framework.ModuleImpl.access$100(ModuleImpl.java:60)
            at org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1631)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
            ... 11 more

    So my question is: am I missing something? Did I do something wrong?
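
    An alternative worth noting (a sketch, not the poster's setup): instead of relying on bootdelegation, the library's packages can be exported from the system bundle via org.osgi.framework.system.packages.extra, so bundles simply Import-Package them while the classes still load from the JVM application class loader where blipnetapi.jar lives. The snippet below shows the idea with Felix embedded through the standard OSGi launch API; the package names are guessed from the class name in the stack trace.

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;

    import org.osgi.framework.Constants;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class EmbeddedFelixSketch {
        public static void main(String[] args) throws Exception {
            Map<String, String> config = new HashMap<String, String>();

            // Export the library's packages from the system bundle so that bundles
            // can Import-Package them while the classes still come from the JVM
            // application class loader (where blipnetapi.jar sits on the classpath).
            // Package names are guessed from the stack trace above.
            config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA,
                    "com.blipsystems.blipnet.api,com.blipsystems.blipnet.api.util");

            FrameworkFactory factory =
                    ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Framework framework = factory.newFramework(config);
            framework.start();

            // ... install and start the BlipNet bundles here ...

            framework.waitForStop(0);
        }
    }
    ```

    When launching Felix from the command line instead, the same property can go into conf/config.properties. Note also that java ignores -classpath when -jar is used, so for the library jar to reach the application class loader the launch would have to name the main class explicitly, for example java -cp bin/felix.jar:lib/blipnetapi.jar org.apache.felix.main.Main.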

  • How can one set up a version control system on a local network, without a server?

    - by Andrew
    Edit: OK, so I learned that I guess I need a distributed source control system. However, are there any UI-based ones, and do they allow you to merge with other users on the network? This is kind of a two-part question, so here it goes. I want to start developing a web application at home (with multiple developers). However, I don't have a dedicated server, nor do I want to pay for one. So first, I don't know which version control system to use for this case; at work we mostly have TFS set up, so I am not too familiar with what's out there. What are the best free CVS/SVN tools out there? Second, is it possible to somehow set up the CVS/SVN so that there is no dedicated server and both clients store up to one week of the source code from the last check-in? Also, it would be helpful if it could integrate with Visual Studio, though this isn't that important at all.

    Problem: There are five users, one of which acts as a server.
    Server connected: all OK.
    Server disconnected: no one can share.

    What I am looking for:
    No server: users still have versioning based on the version ID of the last check-in. Users must check all versions on the network to make sure they aren't outdated based on their last version ID. If they aren't outdated, they check in; otherwise they merge/get latest. On check-in, the current version ID is incremented by 1.

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a package variable that is an Object, called OWell. I have a data flow task that gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None and the following SQL statement:

        Insert into WELL (WELL_ID, WELL_NAME)
        Select OWELL_ID, OWELL_NAME from OWell
        where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a result set. I am getting the following error:

        Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations -- specifically, when inserting/updating data that may be on the order of hundreds of KB or MB. Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show to the user. I have no idea how much data is passing through the network to the remote DB or what its progress is. Basically, all I know is when Update returns, and it is assumed complete (barring any errors or exceptions). But this means all I can show is 0%, then a pause, and then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe even calculate an estimated size per DataRow based on the data type of each column, using sizeof for value types like int and checking length for things like strings or byte arrays. With that, I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress info once Update is called on the TableAdapter. Am I stuck just using an indeterminate progress bar or a waiting mouse cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes or something? How do you best show this kind of progress info using ADO.NET?

  • ImagickDraw::setFont has no effect on my system; suggestions?

    - by Dejay Clayton
    Running on Ubuntu LTS 8.04, PHP 5.2.8, ImageMagick 6.6.1-5 installed from source, and Imagick 2.3.0 installed from pecl. ImageMagick's 'convert' command sets fonts properly, and ImagickDraw::setFont properly complains if an invalid path to a font has been specified. However, specifying a valid font has no effect in changing the appearance of the drawn font whatsoever. Also, specifying a valid filename that contains invalid font data fails to trigger an error. It's almost as if Imagick isn't doing anything other than verifying that the specified file exists. Any suggestions?

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

  • How to loop through LI items in UL, get specific attributes, and pass them via $.ajax data

    - by Ahmed Fouad
    Here is my HTML code:

        <ul id="gallery">
            <li id="attachment-32" class="featured"><a href="..." title=""><img src="..." alt="" /></a></li>
            <li id="attachment-34"><a href="..." title=""><img src="..." alt="" /></a></li>
            <li id="attachment-38"><a href="..." title=""><img src="..." alt="" /></a></li>
            <li id="attachment-64"><a href="..." title=""><img src="..." alt="" /></a></li>
            <li id="attachment-75"><a href="..." title=""><img src="..." alt="" /></a></li>
            <li></li>
        </ul>

    Here is my sample ajax request:

        $.ajax({
            url: '/ajax/upload.php',
            type: 'POST',
            data: { ... },
            success: function(data){ }
        });

    Here is what I want to achieve. How do I get the attachment number from the ID attribute of every LI inside the gallery UL, only when an ID attribute is there, and pass them via the ajax data like this: { attached : '32,34,38,64,75' }? If there is a better way of doing this, let me know; I want to pass the list items which contain attachments for processing. Also, how do I get the list item LI which has the class "featured", e.g. { featured_img : .. }, and pass its attachment ID number if a featured LI exists, and pass featured_img with 0 if none of the list items is featured? (I know how to process it via PHP in the request.) Any help is appreciated. Thank you.

  • What version control system is best designed to *prevent* concurrent editing?

    - by Fred Hamilton
    We've been using CVS (with the TortoiseCVS interface) for years for both source control and wide-ranging document control (including binaries such as Word, Excel, FrameMaker, test data, simulation results, etc.). Unlike with typical version control systems, 99% of the time we want to prevent concurrent editing: when a user starts editing a file, the pre-edit version of the file becomes read-only to everyone else. Many of the people who will be using this are not programmers or even that computer savvy, so we're also looking for a system that lets people simply add documents to the repository, check out and edit a document (unless someone else is currently editing it), and check it back in with a minimum of fuss. We've gotten this to work reasonably well with CVS + TortoiseCVS, but we're now considering Subversion and Mercurial (and are open to others if they're a better fit) for their better version tracking, so I was wondering which one supports locking files most transparently. For example, we'd like exclusive locking enabled as the default, and we want to make it as difficult as possible for someone to accidentally start editing a file that someone else has checked out -- for example, when someone checks out a file for editing, it checks with the master database first even if they have not recently updated their sandbox. Maybe it even won't let a user check out a document if it's off the network and can't check in with the mothership.

  • Can I perform some processing on the POST data before ASP.NET MVC UpdateModel happens?

    - by Domenic
    I would like to strip out non-numeric elements from the POST data before using UpdateModel to update the copy in the database. Is there a way to do this?

        // TODO: it appears I don't even use the parameter given at all, and all the magic
        // happens via UpdateModel and the "controller's current value provider"?
        [HttpPost]
        public ActionResult Index([Bind(Include="X1, X2")] Team model) // TODO: stupid magic strings
        {
            if (this.ModelState.IsValid)
            {
                TeamContainer context = new TeamContainer();
                Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

                // TODO HERE: apply StripWhitespace() to the data before using UpdateModel.
                // The data is currently somewhere in the "current value provider"?
                this.UpdateModel(thisTeam);

                context.SaveChanges();
                this.RedirectToAction(c => c.Index());
            }
            else
            {
                this.ModelState.AddModelError("", "Please enter two valid Xs.");
            }

            // If we got this far, something failed; redisplay the form.
            return this.View(model);
        }

    Sorry for the terseness; I've been up all night working on this, so hopefully my question is clear enough. Also sorry, since this is kind of a newbie question that I might be able to answer with a few hours of documentation-trawling, but I'm time-pressured... bleh.

  • What's the relationship between the Intel Atom Developer Program and the MeeGo operating system?

    - by Arne Evertsson
    I'm trying to understand the relationship between the Intel Atom Developer Program (IADP) and the new OS called MeeGo. IADP lets me create applications that run on both MeeGo and Windows devices, as long as the device is based on the Atom processor. The IADP apps are published in an app store called AppUp, which is very much like the Apple App Store. The MeeGo operating system merges Intel's Moblin and Nokia's Maemo into one OS. The purpose seems to be to make it possible to develop software that will run on Intel-powered devices, Nokia-made devices, as well as devices from other companies. Nokia has its Ovi Store, which will support MeeGo apps. With its OS-independent runtime, the question is what an IADP app really is. Is an IADP app a beast of its own, or is it just a MeeGo app that has been restricted to run only on Atom-powered devices? Will it be possible to recompile my IADP app to run on all MeeGo devices? Sold in the Ovi Store? Intel and Nokia have me really confused. Where should I go as a developer?

  • How can I write System preferences with Java? Can I invoke UAC?

    - by Jonas
    How can I write system preferences with Java, using Preferences.systemRoot()? I tried this:

        Preferences preferences = Preferences.systemRoot();
        preferences.put("/myapplication/databasepath", pathToDatabase);

    But I got this error message:

        2010-maj-29 19:02:50 java.util.prefs.WindowsPreferences openKey
        VARNING: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002. Windows RegOpenKey(...) returned error code 5.
        Exception in thread "AWT-EventQueue-0" java.lang.SecurityException: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002: Access denied
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.putSpi(Unknown Source)
            at java.util.prefs.AbstractPreferences.put(Unknown Source)
            at biz.accountia.pos.install.Setup$2.actionPerformed(Setup.java:43)

    I would like to do this because I want to install an embedded JavaDB database and let multiple users on the computer use the same database with the application. How do I solve this? Can I invoke UAC and do this as Administrator from Java? And if I log in as Administrator when writing, can I read the values with my Java application when I'm logged in as a regular user?
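
    A minimal sketch of the usual split, assuming the system-level write happens once from an elevated process (for example the installer run as Administrator) while ordinary users only read the value back; the node and key names below are illustrative, not taken from the question. Plain Java has no API for raising a UAC prompt itself, so the elevation has to come from how the writing process is launched.

    ```java
    import java.util.prefs.Preferences;

    public class SystemPrefsSketch {
        public static void main(String[] args) {
            // Writing under the system root maps to HKLM\Software\JavaSoft\Prefs on
            // Windows, so this part needs to run elevated (e.g. from the installer).
            Preferences node = Preferences.systemRoot().node("/myapplication"); // illustrative node
            node.put("databasepath", "C:/ProgramData/MyApp/db");                // illustrative value

            // Reading the same node back does not require administrator rights,
            // so every user account can pick up the shared setting.
            String path = Preferences.systemRoot().node("/myapplication")
                                     .get("databasepath", "not set");
            System.out.println("databasepath = " + path);
        }
    }
    ```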

  • Caching page by parts; how to pass variables calculated in cached parts into never-cached parts?

    - by Kirzilla
    Hello. Let's imagine that I have code like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data = ob_get_contents();
            ob_end_clean();
            $cache->save($data, "part1_cache_id");
        }
        echo $data;

        echo never_cache_function($item_id);

        if (!$data_2 = $cache->load("part2_cache_id")) {
            ob_start();
            echo 'Here is the another cached part of the page...';
            $data_2 = ob_get_contents();
            ob_end_clean();
            $cache->save($data_2, "part2_cache_id");
        }
        echo $data_2;

    As you can see, I need to pass the $item_id variable into never_cache_function, but if the first part is cached ("part1_cache_id") then I have no way to get the $item_id value. The only solution I see is to serialize all the data from the first part (including the $item_id value), cache the serialized string, and unserialize it every time the script is executed. Something like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            $data['item_id'] = $item_id;
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data['html'] = ob_get_contents();
            ob_end_clean();
            $cache->save( serialize($data), "part1_cache_id" );
        }
        $data = unserialize($data);
        echo $data['html'];
        echo never_cache_function($data['item_id']);

    Is there any other way to do this trick? I'm looking for the highest-performance solution. Thank you.

    UPDATE: Another question is, how do I implement such caching in a controller without separating the page into two templates? Is it possible?

    PS: Please do not suggest Smarty; I'm really interested in implementing custom caching.

  • Is the TextBox value posted back inside ViewState or the postback data?

    - by burak ozdogan
    In one article I was reading on ViewState, I saw a sentence saying that I should not fall into the mistake of believing that the value of a TextBox is stored in ViewState; it is stored in the postback data. From this, what I understand is that when I post back a web form, the input controls' values are stored in the HTTP request body, not in the ViewState. But as far as I know, ViewState values are stored in a hidden field called __VIEWSTATE anyway. Then does it mean that the __VIEWSTATE value is not posted in the HTTP POST request body as postback data? That sounds like nonsense to me. In other words, if I say the ViewState mechanism for such a scenario works like this, am I seeing it right or am I skipping something?

    1. You enter a value in an empty TextBox and submit the page.
    2. The value of the TextBox is posted back inside the POST HTTP request body. Nothing from the TextBox is inside __VIEWSTATE at this point.
    3. On the server side, the TextBox is created with the default value in the OnInit method of the page.
    4. The TrackChange property of ViewState is set to true.
    5. The posted-back data of the TextBox is loaded. Because it is different from the TextBox default value (because the user entered something), the ViewState of this TextBox is marked as DIRTY. The new value of the TextBox is written into the __VIEWSTATE hidden field.
    6. From now on, the __VIEWSTATE hidden field contains the last given value of the TextBox.
    7. The page is sent to the user's browser with the __VIEWSTATE hidden field, but this time containing the last value entered by the user, ready to be rendered.

    Thanks guys! burak ozdogan

  • MySQL Query That Can Pull the Data I am Seeking?

    - by Amy
    On the project I am working on, I am stuck with the table structure from Hades. Two things to keep in mind:

    1. I can't change the table structure right now. I'm stuck with it for the time being.
    2. The queries are dynamically generated and not hard coded. So, while I am asking for a query that can pull this data, what I am really working toward is an algorithm that will generate the query I need.

    Hopefully, I can explain the problem without making your eyes glaze over and your brain implode. We have an instance table that looks (simplified) along these lines:

        Instances
        InstanceID   active
        1            Y
        2            Y
        3            Y
        4            N
        5            Y
        6            Y

    Then, there are multiple data tables along these lines:

        Table1
        InstanceID   field1   reference_field2
        1            John     5
        2            Sally    NULL
        3            Fred     6
        4            Joe      NULL

        Table2
        InstanceID   field3
        5            1
        6            1

        Table3
        InstanceID   fieldID   field4
        5            1         Howard
        5            2         James
        6            2         Betty

    Please note that reference_field2 in Table1 contains a reference to another instance. Field3 in Table2 is a bit more complicated: it contains a fieldID for Table3. What I need is a query that will get me a list as follows:

        InstanceID   field1   field4
        1            John     Howard
        2            Sally
        3            Fred

    The problem is, in the query I currently have, I do not get Fred, because there is no entry in Table3 for fieldID 1 and InstanceID 6. So, the very best list I have been able to get thus far is:

        InstanceID   field1   field4
        1            John     Howard
        2            Sally

    In essence, if there is an entry in Table1 for field2, and there is not an entry in Table3 that has the InstanceID contained in field2 and the fieldID contained in field3, I don't get the data from field1. I have looked at joins till I'm blue in the face, and I can't see a way to handle the case where Table3 has no entry.
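
    The poster's current query isn't shown, so the following is just a sketch of one LEFT JOIN shape that keeps Fred in the result even when Table3 has no matching row. The WHERE i.active = 'Y' filter is a guess at why instance 4 (Joe) is absent from the desired output, and the JDBC connection details are placeholders.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LeftJoinSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- adjust for the real database.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "password");

            // LEFT JOINs keep every active Table1 row; field4 simply comes back as
            // NULL when Table2/Table3 have no matching rows (the "Fred" case).
            String sql =
                "SELECT t1.InstanceID, t1.field1, t3.field4 " +
                "FROM Instances i " +
                "JOIN Table1 t1 ON t1.InstanceID = i.InstanceID " +
                "LEFT JOIN Table2 t2 ON t2.InstanceID = t1.reference_field2 " +
                "LEFT JOIN Table3 t3 ON t3.InstanceID = t1.reference_field2 " +
                "                   AND t3.fieldID = t2.field3 " +
                "WHERE i.active = 'Y'";

            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                System.out.printf("%s  %s  %s%n",
                        rs.getString("InstanceID"),
                        rs.getString("field1"),
                        rs.getString("field4")); // NULL prints as "null"
            }
            rs.close();
            stmt.close();
            conn.close();
        }
    }
    ```

    Because the joins to Table2 and Table3 are LEFT JOINs keyed off reference_field2, rows 2 and 3 survive with a NULL field4 instead of being filtered out, which is exactly the behaviour an inner join loses.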

  • How can I get the mapi system stub dll to pass extended mapi calls to my dll?

    - by Bogatyr
    For various reasons (questioning the reasons is not helpful to me), I'd like to implement my own extended MAPI DLL for Windows XP. I have a skeleton DLL now, with just a few entry points implemented for testing, but the system MAPI stub (c:\windows\system32\mapi32.dll; I've checked that it's identical to mapistub.dll) will not pass through calls to my DLL, while it happily passes the same calls through to MS Outlook's msmapi32.dll (MAPIInitialize and MAPILoginEx are two such calls). There's some secret handshake between the stub and the extended MAPI DLL wherein the stub checks that "yup, it's an extended MAPI DLL": maybe it's the presence of some additional entry points I haven't implemented yet, maybe it's the return value from some function, I don't know. I've tried tracing a sample app I wrote that calls MAPIInitialize with STraceNT and ProcessMonitor, but that didn't show anything obvious. Tracing has shown that the stub does indeed load my DLL, but then apparently finds the secret sauce missing and returns an error code instead of calling my DLL's function. What more could be needed for calling MAPIInitialize than the presence of MAPIInitialize in my DLL's export table? GetProcAddress says it's there. What I'd like to know is how to minimally extend my skeleton extended MAPI DLL so that the stub MAPI DLL will pass extended MAPI calls through to my DLL. What's the secret sauce? I'd rather not spend a painful week in MSVC reverse engineering the stub behavior.

  • Can I make any ASP.NET/HTML element into form-data that posts back to the server?

    - by Giffyguy
    I am using JavaScript to alter the innerHTML attribute of a <td>, and I need to get that info back in the form submittal. The <td> corresponds to an <asp:TableCell> on the server side, where the Text attribute is set to an initial value. The user cannot enter the value in this particular field. Instead, its value is set by me (via client-side script) based on actions that the user performs. But this field is useless to me if I can't see its value on the server side as well. I'd like to avoid using a read-only textbox, because those are difficult to resize dynamically. Can an <asp:Label> be used as form data? Is there any way to achieve this without letting the user manually enter the data? Or is there a simpler way to store a string as a variable somewhere and send it back as form data?

  • Parse Facebook API data using loop for getting fan page ID#s?

    - by Brandon Lee
    I've been learning how to parse JSON data returned from the Facebook API. I've figured out how to fetch fan pages for a specific profile ID and want to parse them using a loop! Here is the data and an example below. This is the data I get back from the Facebook API:

        Array
        (
            [0] => Array ( [page_id] => XXXXXX60828 )
            [1] => Array ( [page_id] => XXXXXX0750 )
            [2] => Array ( [page_id] => XXXXXX91225 )
            [3] => Array ( [page_id] => XXXXXX1960343 )
            [4] => Array ( [page_id] => XXXXXX60863 )
            [5] => Array ( [page_id] => XXXXXX8582 )
        )

    I need to be able to put this data in a loop and extract the page_id numbers, but I'm still getting familiar with JSON and am having issues figuring this out. How can I loop over this with foreach and pull out the page ID numbers?
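
    The question itself works in PHP, so purely as a language-neutral illustration of the same loop-and-join idea, here is a sketch using the org.json library in Java; the JSON string is made up to mirror the dump above, and the field name page_id is the only detail taken from the question.

    ```java
    import org.json.JSONArray;
    import org.json.JSONObject;

    public class PageIdCollector {
        public static void main(String[] args) {
            // Hypothetical response shaped like the dump in the question:
            // an array of objects that each carry a page_id.
            String json = "[{\"page_id\":\"1160828\"},{\"page_id\":\"20750\"},{\"page_id\":\"391225\"}]";

            JSONArray pages = new JSONArray(json);
            StringBuilder attached = new StringBuilder();
            for (int i = 0; i < pages.length(); i++) {
                JSONObject page = pages.getJSONObject(i);
                if (attached.length() > 0) {
                    attached.append(',');
                }
                attached.append(page.getString("page_id"));
            }

            // Joined IDs ready to be sent as a single request parameter,
            // e.g. attached=1160828,20750,391225
            System.out.println("attached=" + attached);
        }
    }
    ```

    The resulting comma-separated string matches the { attached : '32,34,...' } shape asked about in the question before this one.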

  • How to populate data from .txt file into Excel in VBA?

    - by swei
    I'm trying to create something that reads data from a .txt file and then populates the data into .xls, but after opening the .txt file, how do I get the data out? Basically I'm trying to get the third column of the lines dated '04/06/2010'. After I open the .txt file, when I use ActiveSheet.Cells(row, col), the ActiveSheet is not pointing to the .txt file. My .txt file is like this (space delimited):

        04/05/10 23 29226
        04/05/10 24 26942
        04/06/10 1 23166
        04/06/10 2 22072
        04/06/10 3 21583
        04/06/10 4 21390

    Here is the code I have:

        Dim BidDate As Date
        BidDate = '4/6/2010'

        Workbooks.OpenText Filename:=ForecastFile, StartRow:=1, DataType:=xlDelimited, Space:=True
        If Err.Number = 1004 Then
            MsgBox ("The forecast file " & ForecastFile & " was not found.")
            Exit Sub
        End If
        On Error GoTo 0

        Dim row As Integer, col As Integer
        row = 1
        col = 1
        cell_value = ActiveSheet.Cells(row, col)
        MsgBox ("the cell_value=" & cell_value)
        Do While (cell_value <> BidDate) And (cell_value <> "")
            row = row + 1
            cell_value = ActiveSheet.Cells(row, col)
            ' MsgBox ("the value is " & cell_value)
        Loop
        If cell_value = "" Then
            MsgBox ("A load forecast for " & BidDate & " was not found in your current load forecast file titled '" + ForecastFile + ". " + "Make sure you have a load forecast for the current bid date and then open this spreadsheet again.")
            ActiveWindow.Close
            Exit Sub
        End If

    Can anyone point out where it goes wrong here?

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine on my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

  • Why do pure virtual base classes get direct access to static data members while derived instances do not?

    - by Shamster
    I've created a simple pair of classes. One is pure virtual with a static data member, and the other is derived from the base, as follows:

        #include <iostream>

        template <class T>
        class Base
        {
        public:
            Base (const T _member) { member = _member; }
            static T member;
            virtual void Print () const = 0;
        };

        template <class T>
        T Base<T>::member;

        template <class T>
        void Base<T>::Print () const
        {
            std::cout << "Base: " << member << std::endl;
        }

        template <class T>
        class Derived : public Base<T>
        {
        public:
            Derived (const T _member) : Base<T>(_member) { }
            virtual void Print () const
            {
                std::cout << "Derived: " << this->member << std::endl;
            }
        };

    I've found from this relationship that when I need access to the static data member in the base class, I can call it with direct access as if it were a regular, non-static class member -- i.e., the Base::Print() method does not require a this-> modifier. However, the derived class does require the this->member indirect access syntax. I don't understand why this is. Both class methods are accessing the same static data, so why does the derived class need further specification? A simple call to test it is:

        int main ()
        {
            Derived<double> dd (7.0);
            dd.Print();
            return 0;
        }

    which prints the expected "Derived: 7".

  • How to customize data points on a Flex graph?

    - by Jess
    I have an area graph and I'm looking to have the data points shown. I have a CircleItemRenderer, but this shows all of the data points with the default stroke and fill.

    1) How do I customize the display of my CircleItemRenderer? (Instead of it having an orange fill, how can I change the color?)
    2) How can I decide to show the node for specific data points but not for others? For example, in my .XML file that imports the data for the graph, I may have a variable show_data_point which is true or false.

    Here's the current code I have:

        <mx:AreaSeries yField="numbers" form="segment" displayName="area graph" areaStroke="{darkblue}" areaFill="{blue}">
            <mx:itemRenderer>
                <mx:Component>
                    <mx:CircleItemRenderer/>
                </mx:Component>
            </mx:itemRenderer>
        </mx:AreaSeries>
        </mx:series>

    Thanks a lot for your help!
