Search Results

Search found 25886 results on 1036 pages for 'color key'.

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs), but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and it seems 2NF is a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to have 1 million key fields initially, each of which could have up to 50 keywords. Would a flat structure of keyfield, keyword1, keyword2, ..., keyword50 be the best solution, or would two tables, keys(keyid, keyfield) related 1:M to keywords(keyid, keyword), be a better idea if my queries are mostly going to look for results that have one or more keywords?
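
    A minimal sketch of the normalized two-table design the question describes, with hypothetical table and column names:

        CREATE TABLE KeyFields (
            KeyId     INT IDENTITY PRIMARY KEY,
            KeyField  NVARCHAR(100) NOT NULL
        );

        CREATE TABLE KeyFieldKeywords (
            KeyId    INT NOT NULL REFERENCES KeyFields (KeyId),
            Keyword  NVARCHAR(50) NOT NULL,
            PRIMARY KEY (KeyId, Keyword)
        );

        -- index leading on Keyword so "find key fields by keyword" queries can seek
        CREATE INDEX IX_KeyFieldKeywords_Keyword ON KeyFieldKeywords (Keyword, KeyId);

        -- key fields tagged with any of the given keywords
        SELECT DISTINCT kf.KeyField
        FROM KeyFields kf
        JOIN KeyFieldKeywords kw ON kw.KeyId = kf.KeyId
        WHERE kw.Keyword IN ('red', 'blue');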

  • Better type safety in Java collections

    - by Paul Tomblin
    In my Java coding, I often end up with several Map<String,Map<String,foo>> or Map<String,List<String>> and then I have trouble remembering which String is which key. I comment the declaration with //Map<capabilityId,Map<groupId,foo>> or //Map<groupId,List<capabilityId>>, but it's not the greatest solution. If String wasn't final, I would make new classes CapabilityId extends String and GroupId extends String, but I can't. Is there a better way to keep track of which thing is the key and maybe have the compiler enforce it?
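
    Since String cannot be subclassed, the usual workaround is a thin wrapper type per key kind; a minimal sketch (class names hypothetical):

        final class GroupId {
            private final String value;
            GroupId(String value) { this.value = value; }
            // equals/hashCode matter because these objects are used as map keys
            @Override public boolean equals(Object o) {
                return o instanceof GroupId && value.equals(((GroupId) o).value);
            }
            @Override public int hashCode() { return value.hashCode(); }
            @Override public String toString() { return value; }
        }

        // With an analogous CapabilityId class, the declaration documents itself,
        // and the compiler rejects a GroupId used where a CapabilityId belongs:
        // Map<CapabilityId, Map<GroupId, Foo>> capabilities =
        //         new HashMap<CapabilityId, Map<GroupId, Foo>>();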

  • LINQ how to concatenate 2 db columns to display in dropdownlist

    - by Simke Nys
    I'm trying to concatenate product_name with product_price_kg using LINQ so I can display it as one field in a dropdownlist. When I try to do this I get the following error: value of type 'System.Collections.Generic.List(Of <anonymous type>)' cannot be converted to ... My code is like this:

        Public Function selectAll() As List(Of tblProduct)
            Dim result = From product In dc.tblProducts
                         Select New With {
                             Key .productID = product.pk_product_id,
                             Key .productNameKg = Convert.ToString(product.product_name) & " " & Convert.ToString(product.product_price_kg)
                         }
            Return result.ToList()
        End Function

    This is the dropdownlist that I want to fill:

        <asp:DropDownList ID="DropDownList1" runat="server"
            DataSourceID="ObjectDataSource1"
            DataTextField="productNameKg"
            DataValueField="productID">
        </asp:DropDownList>

    Thanks Grtz Simke
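
    One way out of the anonymous-type mismatch is a small named class with a matching return type; a sketch (class name hypothetical; auto-properties assume VB 2010, on older compilers write the properties out in full):

        Public Class ProductListItem
            Public Property ProductID As Integer
            Public Property ProductNameKg As String
        End Class

        Public Function SelectAll() As List(Of ProductListItem)
            ' project into the named type instead of an anonymous one
            Dim result = From product In dc.tblProducts
                         Select New ProductListItem With {
                             .ProductID = product.pk_product_id,
                             .ProductNameKg = Convert.ToString(product.product_name) &
                                              " " & Convert.ToString(product.product_price_kg)
                         }
            Return result.ToList()
        End Function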

  • Binding a list of checkboxes from view to posted collection in ASP.NET MVC 2

    - by mare
    Given the view code below, which renders a bunch of checkboxes, and the controller code, can someone please explain how I can get the values of the checkboxes (I need the key and the checked status) in the controller?

        <% foreach (string mappingId in Model.Mappings) {%>
        <tr><td>
            <%=mappingId %><br />
            <%=Html.Label("Checkbox_" + mappingId, "Sync?")%>
            <%=Html.CheckBox("Checkbox_" + mappingId, true) %>
        </td></tr>
        <% } %>

        [HttpPost]
        public ActionResult Sync(FormCollection collection)
        {
            foreach (var posted in collection)
            {
                // here the "posted" variable shows up in the debugger as
                // "Checkbox_AD0D1" as Value (AD0D1 being the key in my model) and of type "object"
                // of course, this line fails but it shows what I want to do
                bool currentCheckbox = (bool) posted;
            }
            return View();
        }
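
    A sketch of one way to read the posted values back out, assuming the Checkbox_ naming convention from the view and a using System.Linq directive (Html.CheckBox also renders a hidden false field, so a checked box posts the string "true,false"):

        [HttpPost]
        public ActionResult Sync(FormCollection collection)
        {
            foreach (var key in collection.AllKeys.Where(k => k.StartsWith("Checkbox_")))
            {
                // recover the mapping id from the field name
                string mappingId = key.Substring("Checkbox_".Length);
                // "true,false" when checked, "false" when not
                bool isChecked = collection[key].Contains("true");
                // ... act on (mappingId, isChecked) ...
            }
            return View();
        }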

  • Sorting objects in Python

    - by Curious2learn
    I want to sort objects by one of their attributes. As of now, I am doing it in the following way:

        USpeople.sort(key=lambda person: person.utility[chosenCar], reverse=True)

    This works fine, but I have read that using operator.attrgetter() might be a faster way to achieve this sort. First, is this correct? Assuming that it is, how do I use operator.attrgetter() to achieve this sort? I tried:

        keyFunc = operator.attrgetter('utility[chosenCar]')
        USpeople.sort(key=keyFunc, reverse=True)

    However, I get an error saying that there is no attribute 'utility[chosenCar]'. The problem is that the attribute by which I want to sort is in a dictionary. For example, the utility attribute is in the following form:

        utility = {chosenCar: 25000, anotherCar: 24000, yetAnotherCar: 24500}

    I want to sort by the utility of the chosenCar using operator.attrgetter(). How could I do this? Thanks in advance.
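
    For what it's worth, operator.attrgetter only follows attribute access (including dotted paths such as 'utility.x'); it cannot index into a dict, which is why 'utility[chosenCar]' fails. A sketch of the alternatives:

        import operator

        # the lambda remains the idiomatic key function for a dict lookup
        USpeople.sort(key=lambda person: person.utility[chosenCar], reverse=True)

        # attrgetter/itemgetter can be composed, but it buys no speed or clarity here
        get_utility = operator.attrgetter('utility')
        get_chosen = operator.itemgetter(chosenCar)
        USpeople.sort(key=lambda person: get_chosen(get_utility(person)), reverse=True)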

  • GDB says that a KVO observer is registered even though it is not (or is it?).

    - by Paperflyer
    When my application is closed, the main controller class removes itself as Observer from the model and then releases the model. Like this: - (void)dealloc { [theModel removeObserver:self forKeyPath:@"myValue"]; [theModel release]; [super dealloc]; } And right after that, the debugger says: 2010-04-29 14:07:40.294 MyProgram[13678:a0f] An instance 0x116f2e880 of class TheModel was deallocated while key value observers were still registered with it. Observation info was leaked, and may even become mistakenly attached to some other object. Set a breakpoint on NSKVODeallocateBreak to stop here in the debugger. Here's the current observation info: <NSKeyValueObservationInfo 0x100288450> ( <NSKeyValueObservance 0x1002aca90: Observer: 0x116f40ec0, Key path: myValue, Options: <New: YES, Old: NO, Prior: NO> Context: 0x0, Property: 0x116f80430> ) where 0x116f2e880 is indeed the model and 0x116f40ec0 is indeed the controller. How can the controller still be an observer when it just removed itself as an observer?
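
    One hedged guess, sketched below: at application quit, object teardown order is not guaranteed, so the model may be deallocated before the controller's dealloc ever runs. Removing the observer at a deterministic earlier point, for example in the window delegate's windowWillClose:, avoids racing the deallocations:

        - (void)windowWillClose:(NSNotification *)notification
        {
            // detach before teardown order can deallocate the model out from under us
            [theModel removeObserver:self forKeyPath:@"myValue"];
        }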

  • can't persist jpa entity in app engine

    - by Bunny Rabbit
    public class Blobx {
            private String name;
            private BlobKey blobKey;
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Key id;
            //getters and setters
        }

        public class Userx {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Key id;
            private String name;
            @OneToMany
            private List<Blobx> blobs;
            //getters and setters
        }

    While persisting the above Userx entity object I am encountering:

        java.lang.IllegalStateException: Field "entities.Userx.blobs" contains a persistable object that isnt persistent, but the field doesnt allow cascade-persist!
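
    The exception text points at cascade-persist; a minimal sketch of the usual fix is to cascade the persist through the owning side (alternatively, call em.persist() on each Blobx before persisting the Userx):

        @Entity
        public class Userx {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Key id;

            private String name;

            // cascade so unsaved Blobx children are persisted along with their owner
            @OneToMany(cascade = CascadeType.ALL)
            private List<Blobx> blobs;

            // getters and setters
        }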

  • python: sorting

    - by nabizan
    Hi, I'm building a dict of data in a loop, but since it's a dict, the keys come out sorted alphabetically and not in the order I pushed them in through the loop... Is it possible to somehow turn off the alphabetical sorting? Here is how I do it:

        data = {}
        for item in container:
            data[item] = {}
            ...
            for key, val in item_container.iteritems():
                ...
                data[item][key] = val

    which gives me something like this:

        data = {
            A : { K1 : V1, K2 : V2, K3 : V3 },
            B : { K1 : V1, K2 : V2, K3 : V3 },
            C : { K1 : V1, K2 : V2, K3 : V3 }
        }

    and I want it to be in the order I went through the loop, e.g.:

        data = {
            B : { K2 : V2, K3 : V3, K1 : V1 },
            A : { K1 : V1, K2 : V2, K3 : V3 },
            C : { K3 : V3, K1 : V1, K2 : V2 }
        }
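
    A plain dict does not really sort its keys; the apparent ordering is an implementation accident and cannot be configured. If insertion order matters, collections.OrderedDict (added in Python 2.7 and 3.1) is the usual answer; a sketch of the same loop:

        from collections import OrderedDict  # Python 2.7 / 3.1+

        data = OrderedDict()
        for item in container:
            data[item] = OrderedDict()
            for key, val in item_container.iteritems():
                data[item][key] = val
        # iterating data (or any data[item]) now yields keys in insertion order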

  • Iterating result of Select Query

    - by user294146
    Hi experts, I have a question about a SELECT query; I'll explain it below. I have a table with the following data:

        Column1 (Primary Key)   Column2   Column3
        ---------------------   -------   -------
        1                       C
        2                       C
        3                       Null
        4                       H
        5                       L
        6                       H

    My problem is that I have to replace the value of Column3 with the corresponding value of Column1 for every occurrence of "C", "H" and "L" in Column2. How can I solve this with a query or stored procedure? I need the final SELECT to return:

        Column1 (Primary Key)   Column2   Column3
        ---------------------   -------   -------
        1                       C         1
        2                       C         2
        3                       Null
        4                       H         4
        5                       L         5
        6                       H         6

    Thanks & Regards, Murali
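
    A sketch of both variants, assuming the table is named MyTable (name hypothetical):

        -- return Column1 in place of Column3 for the flagged rows
        SELECT Column1,
               Column2,
               CASE WHEN Column2 IN ('C', 'H', 'L') THEN Column1 ELSE Column3 END AS Column3
        FROM MyTable;

        -- or fix the data in place
        UPDATE MyTable
        SET Column3 = Column1
        WHERE Column2 IN ('C', 'H', 'L');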

  • Table-level diff and sync procedure for T-SQL

    - by Ville Koskinen
    I'm interested in T-SQL source code for synchronizing a table (or perhaps a subset of it) with data from another similar table. The two tables could contain any variables; for example I could have

        base table        source table
        ==========        ============
        id   val          id   val
        ----------        ------------
        0    1            0    3
        1    2            1    2
        2    3            3    4

    or

        base table             source table
        ===================    ==================
        key  val1  val2        key  val1  val2
        -------------------    ------------------
        A    1     0           A    1     1
        B    2     1           C    2     2
        C    3     3           E    4     0

    or any two tables containing similar columns with similar names. I'd like to be able to:
    - check that the two tables have matching columns: the source table has exactly the same columns as the base table and the datatypes match
    - make a diff from the base table to the source table
    - do the necessary updates, deletes and inserts to change the data in the base table to correspond to the source table
    - optionally limit the diff to a subset of the base table, preferably with a stored procedure

    Has anyone written a stored proc for this, or could you point to a source? A sketch of the apply step follows this list.
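
    For the diff-and-apply step on a known key column, SQL Server 2008's MERGE covers the update, insert and delete cases in one statement; a sketch for the first example's shape (generalizing over arbitrary column lists is the part that would need dynamic SQL):

        MERGE base AS b
        USING source AS s
            ON b.id = s.id
        WHEN MATCHED AND b.val <> s.val THEN
            UPDATE SET b.val = s.val
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (id, val) VALUES (s.id, s.val)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;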

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights: combination models predicting likelihoods using multiple child models.

    The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods

    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice.

    Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon, and tell us about the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores

    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.

    One Offer To Predict Them All

    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level.

    We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

    The Switcheroo

    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores, stored on the choice earlier as an Analytical Scores entity, to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);
        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores, and is not distracted by the other session attributes, we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

    Recording Events

    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);
        // force learn at this moment using the Internal Dock entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis());
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results

    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The faceted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion

    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog, and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above, or any Oracle Real-Time Decisions design challenges you might face, please do not hesitate to contact us via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

  • Bind NameValueCollection to GridView?

    - by Xabatcha
    What kind of collection should I use to make a NameValueCollection bindable to a GridView? Binding it directly didn't work. Code in the aspx.cs:

        private void BindList(NameValueCollection nvpList)
        {
            resultGV.DataSource = list;
            resultGV.DataBind();
        }

    Code in the aspx:

        <asp:GridView ID="resultGV" runat="server" AutoGenerateColumns="False" Width="100%">
            <Columns>
                <asp:BoundField DataField="Key" HeaderText="Key" />
                <asp:BoundField DataField="Value" HeaderText="Value" />
            </Columns>
        </asp:GridView>

    Any tip most welcome. Thanks. X.
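
    A sketch of one conversion the two BoundFields can bind to, projecting the keys into objects with Key/Value properties (requires a using System.Linq directive):

        private void BindList(NameValueCollection nvpList)
        {
            // anonymous objects expose Key and Value as bindable properties
            resultGV.DataSource = nvpList.AllKeys
                .Select(key => new { Key = key, Value = nvpList[key] })
                .ToList();
            resultGV.DataBind();
        }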

  • Unable to create index because of duplicate that doesn't exist?

    - by Alex Angas
    I'm getting an error running the following Transact-SQL command: CREATE UNIQUE NONCLUSTERED INDEX IX_TopicShortName ON DimMeasureTopic(TopicShortName) The error is: Msg 1505, Level 16, State 1, Line 1 The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.DimMeasureTopic' and the index name 'IX_TopicShortName'. The duplicate key value is (). When I run SELECT * FROM sys.indexes WHERE name = 'IX_TopicShortName' or SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[DimMeasureTopic]') the IX_TopicShortName index does not display. So there doesn't appear to be a duplicate. I have the same schema in another database and can create the index without issues there. Any ideas why it won't create here?
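
    One reading of the error: the duplicate key value of () hints at several rows sharing the same (possibly blank or NULL) TopicShortName, which the unique index cannot allow; it is the data, not an existing index, that is duplicated. A quick check:

        SELECT TopicShortName, COUNT(*) AS cnt
        FROM DimMeasureTopic
        GROUP BY TopicShortName
        HAVING COUNT(*) > 1;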

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db =#\d nodes Table "public.nodes" Column | Type | Modifiers --------+------------------------+----------- id | integer | not null title | character varying(256) | score | double precision | Indexes: "nodes_pkey" PRIMARY KEY, btree (id) I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting to his input. So I used this query (here searching for all titles starting with "s") =# explain analyze select title,score from nodes where title ilike 's%' order by score desc; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------- Sort (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1) Sort Key: score Sort Method: external merge Disk: 5712kB -> Seq Scan on nodes (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1) Filter: ((title)::text ~~* 's%'::text) Total runtime: 5260.791 ms (6 rows) This was much to slow for using it with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index =# create index title_idx on nodes using btree(lower(title) text_pattern_ops); =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Limit (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1) -> Sort (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1) Sort Key: score Sort Method: top-N heapsort Memory: 17kB -> Bitmap Heap Scan on nodes (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1) Filter: (lower((title)::text) ~~ 's%'::text) -> Bitmap Index Scan on title_idx (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1) Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text)) Total runtime: 1325.085 ms (9 rows) So this gave me a speedup of factor 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting a decent performance with PostgreSQL in that case, too? Or should I better try a different solution (Lucene?, Sphinx?) for implementing my autocomplete feature?

  • Odd nested dictionary behavior in python

    - by adept
    I'm new to Python and am trying to grow a dictionary of dictionaries. I have done this in PHP and Perl, but Python is behaving very differently; I'm sure it makes sense to those more familiar with Python. Here is my code:

        colnames = ['name','dob','id'];
        tablehashcopy = {};
        tablehashcopy = dict.fromkeys(colnames,{});
        tablehashcopy['name']['hi'] = 0;
        print(tablehashcopy);

    Output:

        {'dob': {'hi': 0}, 'name': {'hi': 0}, 'id': {'hi': 0}}

    The problem arises from the second-to-last statement (I put the print in for convenience). I expected to find that one element had been added to the 'name' dictionary, with the key 'hi' and the value 0. But this key/value pair has been added to EVERY sub-dictionary. Why? I have tested this on my Ubuntu machine in both Python 2.6 and Python 3.1; the behaviour is the same.
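
    What happens here: dict.fromkeys evaluates the {} default once and binds every key to that one shared dictionary object, so an insert through any key shows up under all of them. Building an independent dict per key fixes it; a sketch:

        colnames = ['name', 'dob', 'id']

        # one fresh dict per key (dict comprehension: Python 2.7 / 3.x)
        tablehashcopy = {name: {} for name in colnames}

        # equivalent spelling that also works on Python 2.6
        tablehashcopy = dict((name, {}) for name in colnames)

        tablehashcopy['name']['hi'] = 0
        print(tablehashcopy)  # only the 'name' sub-dict now holds 'hi'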

  • Using the ASP.NET membership provider database with your own database?

    - by Shaharyar
    Hello everybody, We are developing an ASP.NET MVC application that currently uses its own database, ApplicationData, for the domain models and another one, Membership, for user management / the membership provider. We do access restrictions using data annotations in our controllers: [Authorize(Roles = "administrators, managers")] This worked great for simple use cases. As we scale the application, our customer wants to restrict specific users to specific areas of our ApplicationData database. Each of our products contains a foreign key referring to the region the product was assembled in. A user story would be: users in the role NewYorkManagers should only be able to edit / see products that are assembled in New York. We created a placeholder table UserRightsRegions that contains the UserId and the RegionId. How can I link the ApplicationData and Membership databases so this works properly, i.e. have cross-database key references? (Is something like this even possible?) All help is more than appreciated!
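
    SQL Server cannot enforce a foreign key across two databases, but queries on the same server can join across them with three-part names; a sketch against the tables described (column names hypothetical):

        SELECT p.*
        FROM ApplicationData.dbo.Products AS p
        JOIN Membership.dbo.UserRightsRegions AS urr
            ON urr.RegionId = p.RegionId
        WHERE urr.UserId = @UserId;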

  • MS SQL and .NET Typed Dataset AllowDBNull Metadata

    - by Christian Pena
    Good afternoon, I am generating a typed dataset from a stored procedure. The stored procedure may contain something like: select t1.colA, t2.colA AS t2colA from t1 inner join t2 on t1.key = t2.key When I generate the typed dataset, the dataset knows whether t1.colA allows NULLs, but it always puts FALSE in AllowDBNull for t2.colA even if t2.colA allows NULL. Is this because the column is aliased? Is there any way, from SQL, to hint to VS that the column allows NULL? We currently have to go in and update the column's AllowDBNull if we regenerate the table. Thanks in advance. Christian

  • ACCESS VBA - DAO in VB - problem with creating relations

    - by Justin
    So take the following example:

        Sub CreateRelation()
            Dim db As Database
            Dim rel As Relation
            Dim fld As Field
            Set db = CurrentDb
            Set rel = db.CreateRelation("OrderID", "Orders", "Products")
            'referential integrity
            rel.Attributes = dbRelationUpdateCascade
            'specify the key in the referenced table
            Set fld = rel.CreateField("OrderID")
            fld.ForeignName = "OrderID"
            rel.Fields.Append fld
            db.Relations.Append rel
        End Sub

    I keep getting the error "No unique index found for the referenced field of the primary table." If I include VB before this sub to create an index on the field, it gives me the error "Index already exists." So I am trying to figure this out. If there aren't any primary keys set, will that cause this not to work? I am confused by this, but I really want to figure it out. OrderID is a FOREIGN KEY in the Products table. Please help. Thanks, Justin
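
    The first error says Orders.OrderID has no unique index for the relation to reference; a sketch that creates one as the table's primary key before appending the relation (skip it, or drop the existing index first, if a key is already defined, which would explain the "Index already exists" error):

        Sub EnsureOrdersPrimaryKey()
            Dim db As DAO.Database
            Dim tdf As DAO.TableDef
            Dim idx As DAO.Index

            Set db = CurrentDb
            Set tdf = db.TableDefs("Orders")

            ' a primary key is by definition a unique index
            Set idx = tdf.CreateIndex("PrimaryKey")
            idx.Primary = True
            idx.Unique = True
            idx.Fields.Append idx.CreateField("OrderID")
            tdf.Indexes.Append idx
        End Sub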

  • How to automate a multi-project build process by including the exe's digital signature in Delphi?

    - by user193655
    After building a project group of two projects with Delphi (2009) I digitally sign the two exes using InstallAware Code Signing, an exe that shipped with Delphi 2009. How can I automate the digital signing, so that when I build, the signature is attached as well? For signing I use a .pvk (private key) file and an .spc (software publisher certificate) file. Subquestion: I created a project group because I have two exes, but they are almost the same; the only things that change are the application icon and the application name (one is ProductOne.dpr, the other is ProductTwo.dpr). In practice I have two brands of the same product with a single build, and activation key details activate one or the other. Now I was asked to change the icon and the filename, and for this I need to build two projects; the activation key is no longer enough to distinguish between the two. If there were a way to do this from a single project, that would be even better.
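
    One hedged route to automating the signing is a post-build event per project using the Windows SDK tools instead of the InstallAware GUI: pvk2pfx converts the .pvk/.spc pair to a .pfx once, then signtool signs each build. A sketch (paths and password are placeholders):

        rem one-time conversion of the key/certificate pair
        pvk2pfx -pvk mykey.pvk -spc mycert.spc -pfx mycert.pfx -po MyPassword

        rem post-build event for each project
        signtool sign /f mycert.pfx /p MyPassword /t http://timestamp.verisign.com/scripts/timstamp.dll ProductOne.exe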

  • Auto populate input based on file name with AngularJS

    - by LouieV
    I am playing around with AngularJS and have not been able to solve this problem. I have a view that has a form to upload a file to a node server. So far I have managed to do this using some directives and a service. I allow the user to send a custom name with the POST data if they desire. What I want to accomplish is that when the user selects a file, the filename model auto-populates. My view looks like:

        <div>
            <input file-model="phpFile" type="file">
            <input name="filename" type="text" ng-model="filename">
            <button ng-click="send()">send</button>
        </div>

    file-model is my directive that allows the file to be assigned to a scope:

        myApp.directive('fileModel', ['$parse', function($parse) {
            return {
                restrict: 'A',
                link: function(scope, element, attrs) {
                    var model = $parse(attrs.fileModel);
                    var modelSetter = model.assign;
                    element.bind('change', function() {
                        scope.$apply(function() {
                            modelSetter(scope, element[0].files[0]);
                        });
                    });
                }
            };
        }]);

    The service:

        myApp.service('fileUpload', ['$http', function($http) {
            this.uploadFileToUrl = function(file, uploadUrl, optionals) {
                var fd = new FormData();
                fd.append('file', file);
                for (var key in file) {
                    fd.append(key, file[key]);
                }
                for (var i = 0; i < optionals.length; i++) {
                    fd.append(optionals[i].name, optionals[i].data);
                }
            };
        }]);

    Here, as you can see, I pass the file, append its properties, and append any optional properties. The controller is where I am having trouble:

        myApp.controller('AddCtrl', function($scope, $location, PEberry, fileUpload) {
            //$scope.$watch(function() {
            //    return $scope.phpFile;
            //},function(newValue, oldValue) {
            //    $scope.filename = $scope.phpFile.name;
            //}, true);

            // if ($scope.phpFiles) {
            //     $scope.filename = $scope.phpFiles.name;
            // }

            $scope.send = function() {
                var uploadUrl = "/files";
                var file = $scope.phpFile;
                //var opts = [{ name: "uname", data: file.name }]
                fileUpload.uploadFileToUrl(file, uploadUrl);
            };
        });

    Thank you for your help!
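
    One likely culprit, sketched below: the commented-out $watch throws on its very first call, when phpFile is still undefined, because watchers always fire once at startup. Guarding the handler is enough to auto-populate the input:

        $scope.$watch('phpFile', function (file) {
            // the watcher fires once at startup with file === undefined
            if (file) {
                $scope.filename = file.name;
            }
        });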

  • Creating a MSI patch (.msp) by hand?

    - by Jerry Chong
    Our team has recently been considering pushing out a minor registry fix to users to modify one particular problematic key. Pretty straightforward stuff; we just need to update one key/value inside the registry. At the moment we are using Wix to build .msi installers for the product. While looking into Wix's support for generating .msp patch files, it seems that the only way to create an .msp is a somewhat overcomplicated multi-step process:

    1. Get a copy of the original MSI, and compile a new copy of the fixed MSI.
    2. Write a new Wix file that points to both installers.
    3. Compile the Wix file with Candle into a .wixobj, and link it into a .wixmsp.
    4. Run Torch/Pyro over before/after snapshots of the original installers and the .wixmsp, or alternatively use MsiMsp.exe.

    Now my question is: can't I simply describe the registry change in a Wix file and directly compile it into the .msp, without steps 1 and 4? That is a huge amount of effort for just a simple change.
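
    For reference, a sketch of the "pure WiX" patch pipeline those steps describe (paths hypothetical); to my knowledge there is no shorter route to an .msp, since an .msp is by definition a transform between two full installer images:

        torch.exe -p -xi RTM\product.wixpdb Fixed\product.wixpdb -out diff.wixmst
        candle.exe patch.wxs
        light.exe patch.wixobj -out patch.wixmsp
        pyro.exe patch.wixmsp -t RTM diff.wixmst -out patch.msp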

  • java keytool question

    - by user384706
    Hi, I created a Java keystore programmatically, of type jks (i.e. the default type). It is initially empty, so I created a DSA certificate:

        keytool -genkey -alias myCert -v -keystore trivial.keystore

    How can I see the public and private keys? I.e. is there a command that prints the private key of my certificate? I could only find keytool -certreq, which in my understanding prints the certificate as a whole:

        -----BEGIN NEW CERTIFICATE REQUEST-----
        MIICaTCCAicCAQAwZTELMAkGA1UEBhMCR1IxDzANBgNVBAgTBkdyZWVjZTEPMA0GA1UEBxMGQXRo
        BQADLwAwLAIUQZbY/3Qq0G26fsBbWiHMbuVd3VICFE+gwtUauYiRbHh0caAtRj3qRTwl
        -----END NEW CERTIFICATE REQUEST-----

    I assume this is the whole certificate. How can I see the private (or public) key via keytool? Thank you
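
    For what it's worth, keytool deliberately offers no command that prints a private key; it can only list entries or export certificates (the public halves). A couple of relevant commands:

        # verbose listing of every entry, including public-key certificate details
        keytool -list -v -keystore trivial.keystore

        # export the certificate (not the private key) for one alias
        # (-exportcert on Java 6+, formerly spelled -export)
        keytool -exportcert -alias myCert -keystore trivial.keystore -file myCert.cer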

  • How can I get a view of favorite user documents by user in Couchdb map/reduce?

    - by Jeremy Raymond
    My CouchDB database has a main document type that looks something like:

        {
            "_id" : "doc1",
            "type" : "main_doc",
            "title" : "the first doc"
            ...
        }

    There is another type of document that stores user information. I want users to be able to tag documents as favorites. Different users can save the same or different documents as favorites. My idea was to introduce a favorite document to track this, something like:

        {
            "_id" : "fav1",
            "type" : "favorite",
            "user_id" : "user1",
            "doc_id" : "doc1"
        }

    It's easy enough to create a view with user_id as the key to get a list of their favorite doc IDs. E.g.:

        function(doc) {
            if (doc.type == "favorite") {
                emit(doc.user_id, doc.doc_id);
            }
        }

    However, I want the list of favorites to display the user_id, doc_id and title from the main document. So, output something like:

        {
            "key" : "user1",
            "value" : ["doc1", "the first doc"]
        }
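
    One sketch using CouchDB's linked-documents feature: emitting a value that contains an _id makes ?include_docs=true fetch the referenced main document, so its title comes back with each row without being copied into the favorite:

        function(doc) {
            if (doc.type == "favorite") {
                // the emitted _id links to the main document
                emit(doc.user_id, {_id: doc.doc_id});
            }
        }
        // query with ?key="user1"&include_docs=true and read
        // row.doc.title alongside row.value._id in each result row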

  • Extending Object in JavaScript

    - by smsteel
    I'm trying to extend Object functionality this way:

        Object.prototype.get_type = function() {
            if (this.constructor) {
                var r = /\W*function\s+([\w\$]+)\(/;
                var match = r.exec(this.constructor.toString());
                return match ? match[1].toLowerCase() : undefined;
            } else {
                return typeof this;
            }
        }

    It's great, but there is a problem:

        var foo = { 'bar' : 'eggs' };
        for (var key in foo) {
            alert(key);
        }

    The loop now makes extra passes, because get_type shows up as an enumerable key on every object. Is there any way to avoid this?
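
    Two common ways around it, sketched below: filter inherited keys inside the loop, or (where ECMAScript 5 is available) define the method as non-enumerable so for-in never sees it:

        // option 1: skip inherited properties in every for-in loop
        for (var key in foo) {
            if (foo.hasOwnProperty(key)) {
                alert(key); // only 'bar'
            }
        }

        // option 2 (ES5): hide the extension from enumeration at the source
        Object.defineProperty(Object.prototype, 'get_type', {
            value: function () { /* ... body as above ... */ },
            enumerable: false
        });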

  • SQL Reporting Services Daylight saving time query (pt 2)

    - by ross-starkey
    I posted a question a couple of days ago (SQL Reporting Services Daylight saving time query) which I received an answer for (thanks very much), but I did not elaborate on the whole problem I am experiencing. Not only did I need the returned datetime format to account for daylight saving, I also need the search parameter @StartDate to allow for DST. Currently, if I key in a scheduled start time of 31/03/2010 11:00, I get no results back, because the SQL DB has already taken the hours difference into consideration. If I key in 31/03/2010 10:00 then the correct details are returned. Is there a way, using T-SQL or the like, to get the search parameter to pass the adjusted time to the DB?
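
    A hedged sketch of one way to adjust the parameter on the database side, shifting @StartDate by the server's current offset from UTC before comparing (table and column names hypothetical; note this applies the offset in force now, not the offset that applied at each stored date):

        DECLARE @OffsetHours int;
        SET @OffsetHours = DATEDIFF(hour, GETUTCDATE(), GETDATE());

        SELECT *
        FROM dbo.Schedule
        WHERE ScheduledStart >= DATEADD(hour, -@OffsetHours, @StartDate);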
