Search Results

Search found 4369 results on 175 pages for 'merge tracking'.

Page 125 of 175

  • Google Analytics - Events versus Pageviews, can you compare them?

    - by Aart Nicolai
    Hi, A mortgage page on our website is accessible via multiple links on one page. In order to determine which of the links on that same page has been clicked, I use event tracking. All events for these links are stored under the category "mortgage". When I look at the total number of pageviews for this mortgage page and the total number of events for the event category "mortgage", the number of pageviews is 10%-20% higher. My questions are: Can I compare pageviews and events? If not, why not? Thanks, Aart Nicolai (funda.nl)

    Read the article

  • git merging changes to local branch

    - by ScottS
    Is it possible to merge changes from a central repo into a local branch without having to commit/stash the edits on the local branch and check out master? If I am working on the local branch "work" and there are some uncommitted changes, I use the following steps to get updates from the central repo into my working branch:

      git stash
      git checkout master
      git pull
      git checkout work
      git rebase master
      git stash pop

    Usually there are no uncommitted changes in "work" and then I omit the stash steps. What I would really like is something like the following:

      git pull master    (updates master while the work branch is checked out and has changes)
      git rebase master  (rebases the updates into the work branch; uncommitted changes are still safe)

    Is there something easier than what I currently do?

    Read the article

  • Getting an odd error, MSSQL Query using `WITH` clause

    - by Aren B
    The following query:

      WITH CteProductLookup(ProductId, oid) AS
      (
          SELECT p.ProductID, p.oid
          FROM [dbo].[ME_CatalogProducts] p
      )
      SELECT
          rel.Name as RelationshipName,
          pl.ProductId as FromProductId,
          pl2.ProductId as ToProductId
      FROM
          ( [dbo].[ME_CatalogRelationships] rel
            INNER JOIN CteProductLookup pl ON pl.oid = rel.from_oid )
          INNER JOIN CteProductLookup pl2 ON pl2.oid = rel.to_oid
      WHERE
          rel.Name = 'BundleItem' AND pl.ProductId = 'MX12345';

    is generating this error:

      Msg 319, Level 15, State 1, Line 5
      Incorrect syntax near the keyword 'with'. If this statement is a common table expression,
      an xmlnamespaces clause or a change tracking context clause, the previous statement must
      be terminated with a semicolon.

    This happens on execution only; there are no errors/warnings on the SQL statement in Management Studio. Any ideas?

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with the master but I always get this warning when I try to push the branch after rebasing:

      To prevent you from losing history, non-fast-forward updates were rejected
      Merge the remote changes before pushing again.  See the 'Note about
      fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works but I feel like this is probably bad practice. I want to keep this branch "in sync" with the master quickly and easily. Is there a better way of handling this task?

    Read the article

  • C# how to wait for a webpage to finish loading before continuing

    - by MD6380
    I'm trying to create a program to clone multiple bugs at a time through the web interface of our defect tracking system. How can I wait until a page is completely loaded before I continue?

      // This is pseudo code, but this should give you an idea of what I'm trying to do. The
      // actual code uses multi-threading and all that good stuff :).
      foreach (string bug in bugs)
      {
          webBrowser.Navigate(new Uri(url));
          webBrowser.Document.GetElementById("product").SetAttribute("value", product);
          webBrowser.Document.GetElementById("version").SetAttribute("value", version);
          webBrowser.Document.GetElementById("commit").InvokeMember("click");

          // Need code to wait for page to load before continuing.
      }

    Read the article

  • ruby on rails group by with null values problem

    - by winter sun
    I have an hours table in which I store user time tracking information. The table consists of the following columns:

      project_id
      task_id (optional, can be null)
      worker_id
      reported_date
      working_hours

    Each worker enters several records per day, so generally the table looks like this:

      id  project_id  worker_id  task_id  reported_date  working_hours
      ==  ==========  =========  =======  =============  =============
      1   1           1          1        10/10/2011     4
      2   1           1          1        10/10/2011     14
      3   1           1                   10/10/2011     4
      4   1           1                   10/10/2011     14

    The task_id is not a required field, so there can be times when the user does not select it and the task_id cell is empty. Now I need to display the data using a group by clause, so the result will be something like this:

      project_id  worker_id  task_id  working_hours
      ==========  =========  =======  =============
      1           1          1        18
      1           1                   18

    I did the following group by condition:

      @group_hours = Hour.group('project_id, worker_id, task_id').select('project_id, task_id, worker_id, sum(working_hours) as working_hours_sum')

    My view looks like this:

      <% @group_hours.each do |b| %>
        <%= b.project.name if b.project %>
        <%= b.worker.First_name if b.worker %>
        <%= b.task.name if b.task %>
        <%= b.working_hours_sum %>
      <% end %>

    This is working, but only if the task_id is not null. When task_id is null it presents all the records without grouping them, like this:

      project_id  worker_id  task_id  working_hours
      ==========  =========  =======  =============
      1           1          1        18
      1           1                   4
      1           1                   14

    I will appreciate any kind of solution to this problem.

    Read the article

  • Can I use the CSS :visited pseudo class on 'wildcard' links?

    - by rabidpebble
    Let's say I have a site with multiple links as follows:

      www.example.com/product/1
      www.example.com/product/2
      www.example.com/product/3

    I also append tracking info to links from time to time so that I can see how my site is being used, e.g., if somebody visits the products page from the product browser I would set a ref parameter:

      www.example.com/product/1&ref=pb
      www.example.com/product/2&ref=pb
      www.example.com/product/3&ref=pb

    The problem with this is that if the user visits a link of the first type and then views a link of the second type, the :visited pseudo class doesn't seem to apply, because the browser only seems to match on exact URLs. Is there any way to have "wildcards" apply to links in this sense, so that when the user sees either the first type or the second type of link it is highlighted? Note: I cannot change this "ref" architecture; it is inherited.

    Read the article

  • ASP.Net MVC elegant UI and ModelBinder authorization

    - by SDReyes
    We know authorization is a cross-cutting concern, and we do everything we can to avoid merging business logic into our views. But I still have not found an elegant way to filter UI components (e.g. widgets, form elements, tables, etc.) using the current user's roles without contaminating the view with business logic. The same applies to model binding.

    Example

      Form: Product Creation
      Fields:
        Name
        Price
        Discount
      Roles:
        Role: Administrator
          Is allowed to see and modify the Name field
          Is allowed to see and modify the Price field
          Is allowed to see and modify the Discount field
        Role: Administrator assistant
          Is allowed to see and modify the Name field
          Is allowed to see and modify the Price field

    The fields shown for each role are different, and model binding needs to ignore the Discount field for the 'Administrator assistant' role. How would you do it?

    Read the article

  • Extending configuration for .Net 3.5 Applications

    - by Maximiliano Rios
    Due to a requirement in my current project, I have to build a configuration manager that merges local config info with configuration stored in a database. Custom configuration sections don't fit my needs; the problem is that I don't know the handler's type before loading certain information. For example, only after loading the database information will I know what myhandler's type is, not before. So I thought I would write my own handler, but I can't leave the type blank for sections; .NET requires the type in order to match the myhandler nodes. I'm thinking of building a different parser to read the XML nodes, but I would prefer to match this structure. I haven't found any information on how to do that yet. Is there any way? Can I extend or hook something into the framework so it can load types on the fly and validate nodes? Thanks in advance.

    Read the article

  • Extract page from PDF using iText and clojure

    - by KobbyPemson
    I am trying to extract a single page from a PDF with Clojure by translating the splitPDF method I found here: http://viralpatel.net/blogs/itext-tutorial-merge-split-pdf-files-using-itext-jar/

    I keep getting this error:

      IOException Stream Closed  java.io.FileOutputStream.writeBytes (:-2)

    This prevents me from opening the document while the REPL is still open. Once I close the REPL I'm able to access the document. Why do I get the error? How do I fix it? How can I make it more clojurey?

      (import '(com.itextpdf.text Document)
              '(com.itextpdf.text.pdf PdfReader PdfWriter PdfContentByte PdfImportedPage BaseFont)
              '(java.io File FileInputStream FileOutputStream InputStream OutputStream))

      (defn extract-page [src dest pagenum]
        (with-open [d  (Document.)
                    os (FileOutputStream. dest)]
          (let [srcpdf  (->> src FileInputStream. PdfReader.)
                destpdf (PdfWriter/getInstance d os)]
            (doto d (.open) (.newPage))
            (.addTemplate (.getDirectContent destpdf)
                          (.getImportedPage destpdf srcpdf pagenum)
                          0 0))))

    Read the article

  • Change the current branch to master in git

    - by Karel Bílek
    I have a repository in git. I made a branch, then did some changes both to the master and to the branch. Then, tens of commits later, I realized the branch is in a much better state than the master, so I want the branch to "become" the master and disregard the changes on master. I cannot merge it, because I don't want to keep the changes on master. What should I do? (This will very possibly be a duplicate question, since it is trivial, but I have not found it here.)

    Read the article

  • MySQL Export with Column Heading

    - by st4nt0n
    Hello - I am very, very new to MySQL. I've got experience in general technical terms, but not with the syntax or concepts of MySQL. I have been tasked with exporting a table from MySQL into a pipe-delimited .txt or .xls that I can use to add 7500 more records to manually, then import back into the table. I tried to use INTO OUTFILE, but I don't get column headings, which I need for reference to merge the new records. Is there a good resource that can explain this to a complete novice? I would usually go down to my bookstore and start learning, but I'm on a bit of a time crunch. Thanks all!

    Read the article

  • Is there any reason to use TFS 2010 in a micro ISV?

    - by kyrisu
    Yesterday I was checking VS2010 editions here and I noticed that with VS10 with MSDN we get TFS 2010 with 1 CAL. I'm a micro ISV (basically a sole developer, many clients). I just want to save time - has anyone tried it in a similar scenario? Are there any features worth looking into for such a small implementation? P.S. Right now I'm using Git with gitextension - I'm happy with it, but I would like something more integrated with project management and bug tracking so I can show it to my clients when I'm working on their projects.

    Read the article

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files to a local directory, i.e. the makefile contains:

      tmp/%.txt: tmp
              ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

      INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have e.g.

      %.out: %.txt
              some command

    as well as targets to merge results depending on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints welcome.

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

      INVENTORY ITEM TYPES:
        Hardware
          PC
            Desktop
            Server
            Laptop
          Accessory
            Input (keyboards, scanners etc)
            Output (monitors, printers etc)
            Storage (USB sticks, tape drives etc)
            Communication (network cards, routers etc)
        Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want to have a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype etc). I don't really want to over-complicate my data model with each subtype being inherited even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this but I can't put my finger on it. Any assistance or input appreciated!

    Read the article

  • Using set in Python inside a loop

    - by user210481
    I have the following list in Python:

      [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]

    I want to group them into [[1,2,7],[3,4,6,7]]. My code to do this looks like this:

      l = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]
      lf = []
      for li in l:
          for lfi in lf:
              if lfi.intersection(set(li)):
                  lfi = lfi.union(set(li))
                  break
          else:
              lf.append(set(li))

    lf is my final list. I loop over l and lf, and when I find an intersection between an element from l and another from lf, I would like to merge them (union). But I can't figure out why this is not working. The first two elements of the list l are being inserted with the append command, but the union is not working. My final list lf looks like:

      [set([1, 2]), set([3, 4])]

    It seems to be something pretty basic, but I'm not familiar with sets. I appreciate any help. Thanks
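
    A minimal sketch of the behaviour in question, assuming the goal is simply to merge sublists that share an element: set.union returns a new set, so "lfi = lfi.union(...)" only rebinds the loop variable and leaves the set stored in lf untouched, whereas an in-place update mutates the stored set. (A single pass like this is enough for the sample data; bridging groups in general would need a second pass or union-find.)

      # Python sketch: an in-place union keeps the merge, rebinding does not.
      pairs = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]

      groups = []
      for li in pairs:
          s = set(li)
          for g in groups:
              if g & s:       # shared element found
                  g |= s      # mutates the set object stored inside `groups`
                  break
          else:
              groups.append(s)

      print(groups)  # [{1, 2, 7}, {3, 4, 6, 9}]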

    Read the article

  • Types in Python - Google Appengine

    - by Chris M
    Getting a bit peeved now; I have a model and a class that's just storing a GET request in the database; basic tracking.

      class SearchRec(db.Model):
          WebSite = db.StringProperty()  # required=True
          WebPage = db.StringProperty()
          CountryNM = db.StringProperty()
          PrefMailing = db.BooleanProperty()
          DateStamp = db.DateTimeProperty(auto_now_add=True)
          IP = db.StringProperty()

      class AddSearch(webapp.RequestHandler):
          def get(self):
              searchRec = SearchRec()
              searchRec.WebSite = self.request.get('WEBSITE')
              searchRec.WebPage = self.request.get('WEBPAGE')
              searchRec.CountryNM = self.request.get('COUNTRY')
              searchRec.PrefMailing = bool(self.request.get('MAIL'))
              searchRec.IP = self.request.get('IP')

    Bool has my biscuit; I thought that setting bool(self.reque....) would set the type from the string, but no matter what I pass it, it still stores it as TRUE in the database. I had the same issue with using required=True on strings for the model; the damn thing kept saying that nothing was being passed... but it had. Ta
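
    A minimal sketch of why the stored value is always TRUE, assuming the request parameter arrives as a string: Python's bool() is True for any non-empty string, including "false" and "0", so an explicit comparison is needed to get a real boolean. The to_bool helper and its accepted values below are illustrative assumptions, not part of the original code.

      # bool() on a request parameter is True for ANY non-empty string.
      print(bool("false"))   # True
      print(bool("0"))       # True
      print(bool(""))        # False -- only the empty string is falsy

      def to_bool(value):
          # Hypothetical helper: treat a small set of strings as True,
          # everything else (including "false", "0" and "") as False.
          return str(value).strip().lower() in ("true", "1", "yes", "on")

      print(to_bool("false"))  # False
      print(to_bool("True"))   # True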

    Read the article

  • Can ActionScript tell when a SWF was published?

    - by spiralganglion
    I'd like to write a little class that adds a Day/Month box showing the date a SWF was published from Flash. The company I work for regularly produces many, many SWFs and many versions of each, iterating over the course of months. A version-tracking system we've been using to communicate with our clients is a Day/Month date box that gives the date the SWF was published. Up until now, we've been filling in the publish date by hand. If there's any way I can do this programmatically with ActionScript, that'd be fantastic. Any insight? Basically, all I need is the call that gives me the publish date, or even... anything about the circumstances under which a SWF was published that I could use to roll into some form of... automated version identification, unique to this SWF. So, can ActionScript tell when a SWF was published?

    Read the article

  • jQuery: 'async: false' Not Working With IE7 / IE6

    - by Norbert
    I created a simple tracking script which adds the user's info to a database when the page is unloaded. It works on all browsers except IE7 and IE6. IE7 gives me errors, but I can't open the "debugger" because I'm using the standalone version (or at least that's what I think the problem is). I removed the async: false from the script below and I didn't get any errors, but I need async set to false in order for the script to work. Any ideas?

      $(window).unload(function() {
          $.ajax({
              type: "POST",
              async: false,
              url: "add.php",
              data: "ip=" + jIp + "&date=" + jDate + "&time=" + jTime,
          });
      });

    Update: I got IE7 to display the error, kinda. When I click OK on the dialog on top, it closes both dialogs. Ugh!

    Read the article

  • Setting multiple datasources for a report in ASP.NET

    - by Nandini
    Hi all, I have a report whose data is derived from two stored procedures, so I need to set these two datasources for generating the report. The reports which have only one SP, i.e. only one datasource, work properly. For setting the datasource, I wrote code like this:

      Dim reportdocument As ReportDocument
      Dim reportPath As String = Server.MapPath("CrystalRpts\Report.rpt")
      reportdocument.Load(reportPath)

      'Function for Setting the Connection
      SetDBLogonForReport(MyConnectionInfo, reportdocument)

      Dim dt1 As DataTable = Datasource1
      Dim dt2 As DataTable = Datasource2
      dt1.Merge(dt2)

      reportdocument.SetDataSource(dt1)
      CrystalReportViewer.ReportSource = reportdocument

    But the report is not generated; it shows the following error:

      The Report requires additional information
      Servername:-  Server
      Database:-    Database
      UserID:-
      Password:-

    What could be the reason for this error?

    Read the article

  • How to apply coding methodologies and practices to non-coding work?

    - by Dan
    I can talk for hours about best practice, source control, change management, feature tracking, development cycles and the lot, but most of what I've learnt or read seems to apply to nuts-and-bolts programming of compiled applications. You know, ASCII files that get turned into 1s and 0s. How does one apply the same discipline and wisdom to working in environments that are point-and-click and config-centric? I'm thinking of CMSs and specifically, my current 9 to 5, SharePoint. Traditional practices of source control and dev-staging-production seem to break down since we're not working with code, and the live environment changes with user input. So to sum up a rather lengthy question, what works in a no-code environment?

    Read the article

  • creating a color coded time chart using colorbar and colormaps in python

    - by Rusty
    I'm trying to make a time tracking chart based on a daily time tracking file that I used. I wrote code that crawls through my files and generates a few lists.

    endTimes is a list of times that a particular activity ends, in minutes, going from 0 at midnight the first day of the month to however many minutes are in a month.

    labels is a list of labels for the times listed in endTimes. It is one shorter than endTimes since the trackers don't have any data about before minute 0. Most labels are repeats.

    categories contains every unique value of labels in order of how well I regard that time.

    I want to create a colorbar or a stack of colorbars (1 for each day) that will depict how I spend my time for a month and put a color associated with each label. Each value in categories will have a color associated. More blue for more good. More red for more bad. It is already in order for the jet colormap to be right, but I need to get discrete color values evenly spaced out for each value in categories. Then I figure the next step would be to convert that to a listed colormap to use for the colorbar based on how the labels are associated with the categories.

    I think this is the right way to do it, but I am not sure. I am not sure how to associate the labels with color values.

    Here is the last part of my code so far. I found one function to make a discrete colormap. It does, but it isn't what I am looking for and I am not sure what is happening. Thanks for the help!

      # now I need to develop the graph
      import numpy as np
      from matplotlib import pyplot,mpl
      import matplotlib
      from scipy import interpolate
      from scipy import *

      def contains(thelist,name):
          # checks if the current list of categories contains the one just read
          for val in thelist:
              if val == name:
                  return True
          return False

      def getCategories(lastFile):
          '''
          must determine the colors to use
          I would like to make a gradient so that the better the task, the closer to blue
          bad labels will receive colors closer to blue
          read the last file given for the information on how I feel the order should be
          then just keep them in the order of how good they are in the tracker
          use a color range and develop discrete values for each category by evenly spacing them out
          any time not found should assume to be sleep
          sleep should be white
          '''
          tracker = open(lastFile+'.txt') # open the last file
          # find all the categories
          categories = []
          for line in tracker:
              pos = line.find(':')        # does it have a : or a ?
              if pos==-1:
                  pos=line.find('?')
              if pos != -1:               # ignore if no : or ?
                  name = line[0:pos].strip()              # split at the : or ?
                  if contains(categories,name)==False:    # if the category is new
                      categories.append(name)             # make a new one
          return categories

      # find good values in order of last day
      newlabels=[]
      for val in getCategories(lastDay):
          if contains(labels,val):
              newlabels.append(val)
      categories=newlabels

      # convert discrete colormap to listed colormap python
      for ii,val in enumerate(labels):
          if contains(categories,val)==False:
              labels[ii]='sleep'

      # create a figure
      fig = pyplot.figure()
      axes = []
      for x in range(endTimes[-1]%(24*60)):
          ax = fig.add_axes([0.05, 0.65, 0.9, 0.15])
          axes.append(ax)

      # figure out the colors to use
      # stole this function to make a discrete colormap
      # http://www.scipy.org/Cookbook/Matplotlib/ColormapTransformations
      def cmap_discretize(cmap, N):
          """Return a discrete colormap from the continuous colormap cmap.

          cmap: colormap instance, eg. cm.jet.
          N: Number of colors.

          Example
              x = resize(arange(100), (5,100))
              djet = cmap_discretize(cm.jet, 5)
              imshow(x, cmap=djet)
          """
          cdict = cmap._segmentdata.copy()
          # N colors
          colors_i = np.linspace(0,1.,N)
          # N+1 indices
          indices = np.linspace(0,1.,N+1)
          for key in ('red','green','blue'):
              # Find the N colors
              D = np.array(cdict[key])
              I = interpolate.interp1d(D[:,0], D[:,1])
              colors = I(colors_i)
              # Place these colors at the correct indices.
              A = zeros((N+1,3), float)
              A[:,0] = indices
              A[1:,1] = colors
              A[:-1,2] = colors
              # Create a tuple for the dictionary.
              L = []
              for l in A:
                  L.append(tuple(l))
              cdict[key] = tuple(L)
          # Return colormap object.
          return matplotlib.colors.LinearSegmentedColormap('colormap',cdict,1024)

      # jet colormap goes from blue to red (good to bad)
      cmap = cmap_discretize(mpl.cm.jet, len(categories))
      cmap.set_over('0.25')
      cmap.set_under('0.75')
      #norm = mpl.colors.Normalize(endTimes,cmap.N)

      print endTimes
      print labels

      # make a color list by matching labels to a picture
      #norm = mpl.colors.ListedColormap(colorList)
      cb1 = mpl.colorbar.ColorbarBase(axes[0], cmap=cmap,
                                      orientation='horizontal',
                                      boundaries=endTimes,
                                      ticks=endTimes,
                                      spacing='proportional')
      pyplot.show()
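
    One possible way to associate each label with a color, sketched with toy data and under the assumption that categories is ordered best-to-worst and endTimes is strictly increasing with a leading 0 (the jet sampling, BoundaryNorm and ColorbarBase usage are illustrative, not taken from the original code):

      import numpy as np
      import matplotlib.pyplot as plt
      from matplotlib import cm, colors, colorbar

      # Toy stand-ins for the post's lists; len(labels) == len(endTimes) - 1.
      endTimes = [0, 60, 180, 240, 480]
      labels = ['sleep', 'work', 'email', 'work']
      categories = ['work', 'email', 'sleep']   # assumed order: best -> worst

      # One evenly spaced jet color per category (blue = best, red = worst).
      category_colors = cm.jet(np.linspace(0.0, 1.0, len(categories)))
      color_of = dict(zip(categories, category_colors))

      # One color per time segment, looked up by that segment's label.
      segment_colors = [color_of[lab] for lab in labels]
      cmap = colors.ListedColormap(segment_colors)
      norm = colors.BoundaryNorm(endTimes, cmap.N)

      fig, ax = plt.subplots(figsize=(8, 1))
      colorbar.ColorbarBase(ax, cmap=cmap, norm=norm,
                            boundaries=endTimes, ticks=endTimes,
                            spacing='proportional', orientation='horizontal')
      plt.show()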

    Read the article

  • Entity Framework associations killing performance

    - by Chris
    Here is the performance test I am looking at. I have 8 different entities that are table-per-type. Some of the entities contain over 100 thousand rows. This particular application does several recursive calculations on the client, so I think it may be best to preload the data instead of lazy loading. If there are no associations, I can load the entire database in about 3 seconds. As I add associations in any way, the performance starts to drastically decline. I am loading all the data the same way (just calling ToList() on the entity attached to the context). I ran the test with EDMX-generated classes and self-tracking entities and had similar results. I am sure that if I were to try and deal with the associations myself, similar to how I would in a DataSet, the performance problem would go away. On the other hand I am pretty sure this is not how the Entity Framework was intended to be used. Any thoughts or ideas?

    Read the article

  • Combining Two Models in Rails for a Form

    - by matsko
    Hey guys. I'm very new to Rails and I've been building a CMS application backend. All is going well, but I would like to know if this is possible. Basically I have two models:

      @page          { id, name, number }
      @extended_page { id, page_id, description, image }

    The idea is that there are a bunch of pages but NOT ALL pages have extended content. In the event that there is a page with extended content, I want to be able to have a form that allows for editing both of them. In the controller:

      @page = Page.find(params[:id])
      @extended = Extended.find(:first, :conditions => ["page_id = ?", @page.id])
      @combined = ... # merge the two somehow

    So in the view:

      <%- form_for @combined do |f| %>
        <%= f.label :name %>
        <%= f.text_field :name %>
        ...
        <%= f.label :description %>
        <%= f.text_field :description %>
      <%- end %>

    This way, in the controller there only has to be one model that will be updated (which will update both). Is this possible?

    Read the article

  • Efficient mass string search problem.

    - by Monomer
    The Problem: A large static list of strings is provided, along with a pattern string comprised of data and wildcard elements (* and ?). The idea is to return all the strings that match the pattern - simple enough.

    Current Solution: I'm currently using a linear approach of scanning the large list and globbing each entry against the pattern.

    My Question: Are there any suitable data structures that I can store the large list in such that the search's complexity is less than O(n)? Perhaps something akin to a suffix trie? I've also considered using bi- and tri-grams in a hashtable, but the logic required to evaluate a match based on a merge of the list of words returned and the pattern is a nightmare, and I'm not convinced it's the correct approach.
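
    A minimal Python sketch of the n-gram idea mentioned above (the trigram length, helper names and sample data are illustrative assumptions): index every string by its trigrams, extract the literal runs between wildcards from the pattern, intersect the candidate sets for those runs' trigrams, and glob only the survivors; patterns with no literal run of three or more characters fall back to the linear scan.

      import re
      from collections import defaultdict
      from fnmatch import fnmatchcase

      def trigrams(s):
          return {s[i:i + 3] for i in range(len(s) - 2)}

      def build_index(strings):
          # Map each trigram to the set of string indices containing it.
          index = defaultdict(set)
          for i, s in enumerate(strings):
              for g in trigrams(s):
                  index[g].add(i)
          return index

      def search(pattern, strings, index):
          # Literal runs are the pieces of the pattern between * and ? wildcards.
          literals = [run for run in re.split(r'[*?]+', pattern) if len(run) >= 3]
          if not literals:
              # No usable literal run: fall back to the linear glob scan.
              return [s for s in strings if fnmatchcase(s, pattern)]
          candidates = None
          for run in literals:
              for g in trigrams(run):
                  ids = index.get(g, set())
                  candidates = ids if candidates is None else candidates & ids
                  if not candidates:
                      return []
          # Verify the (hopefully small) candidate set with a real glob match.
          return [strings[i] for i in sorted(candidates) if fnmatchcase(strings[i], pattern)]

      words = ["merge tracking", "bug tracking", "time tracker", "margin call"]
      idx = build_index(words)
      print(search("*track*", words, idx))   # ['merge tracking', 'bug tracking', 'time tracker']
      print(search("m?rge*", words, idx))    # ['merge tracking']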

    Read the article
