Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • GNUPlot: plot different ranges with different styles

    - by Mr. Shickadance
    Hello all, I know this should be pretty simple, but I haven't been able to find a similar example. I need to plot different ranges of a datafile differently, but on the same graph. For instance, say my datafile represents a function with x and y values. I want to plot the first N values using a style like lines, and then the next M values using a different style, like points. I was thinking I would need a plot command similar to this:

        plot [1:5] "my.data" using 1:2 with lines, [6:12] using 1:2 with points, [13:19] using 1:2 with lines

    Essentially I want to distinguish different areas of the function. Any ideas? I'm sorry if it sounds like I'm rambling, but I am quite stumped. Thanks in advance!
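
    A minimal Python/matplotlib sketch of the same idea (matplotlib rather than gnuplot, so the exact gnuplot syntax is left to the asker): split the data by row index and draw each slice with its own style on one set of axes. The filename and two-column layout are assumptions taken from the question.

        # Sketch: plot different index ranges of one datafile with different styles.
        # Assumes "my.data" has two whitespace-separated columns (x, y).
        import numpy as np
        import matplotlib.pyplot as plt

        data = np.loadtxt("my.data")            # hypothetical datafile
        x, y = data[:, 0], data[:, 1]

        fig, ax = plt.subplots()
        ax.plot(x[:5], y[:5], "-", label="rows 1-5 (lines)")
        ax.plot(x[5:12], y[5:12], "o", label="rows 6-12 (points)")
        ax.plot(x[12:19], y[12:19], "-", label="rows 13-19 (lines)")
        ax.legend()
        plt.show()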

    Read the article

  • ant support for dynamic targets

    - by Li He
    I previously saw some similar questions on Stack Overflow but didn't see any solutions. I suspect the answer may be that this is impossible, and I am trying to get confirmation of that. AFAIK, an Ant project contains several targets and each target may have several tasks. There is a task, macrodef, that defines a sequential of 'things' (tasks, I suppose?). I tried to put a target inside this block, but Ant complains that the name of the target is missing (I am using an attribute of the macrodef to generate the name of the target). So that could be a dead end. Then I found that by using a script task we have access to the Project object and can even call addTarget/addOrReplaceTarget from there. But it seems that the targets I create there have no effect on the running targets. Does that mean Ant doesn't support manipulating dependencies at target runtime? Is there any way to generate these targets before Ant starts building the dependency graph?

    Read the article

  • Graphviz: how to set 'default' arrow style?

    - by sdaau
    Hi all, consider this dot language code:

        digraph graphname {
            subgraph clusterA {
                node [shape=plaintext,style=filled];
                1 -> 2 [arrowhead=normal,arrowtail=dot];
                2 -> 3 -> X2 -> 5;
                6; 7;
                label = "A";
                color=blue
            }
        }

    In the above example, only the 1 -> 2 connection will have the arrowhead=normal,arrowtail=dot style applied; all the other arrows will be of the "default" style. My question is - how do I set the arrow style (for the entire subgraph - or for the entire graph), without having to copy-paste "[arrowhead=normal,arrowtail=dot];" next to each edge connection? Thanks in advance, Cheers!
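
    For what it's worth, DOT allows a default edge attribute statement per (sub)graph, and the same idea can be sketched with the Python graphviz package (the package is an assumption; the asker is writing raw DOT):

        # Sketch with the Python 'graphviz' package: declare default edge
        # attributes once instead of repeating them on every edge.
        from graphviz import Digraph

        g = Digraph("graphname")
        with g.subgraph(name="clusterA") as a:
            a.attr(label="A", color="blue")
            a.attr("node", shape="plaintext", style="filled")
            a.attr("edge", arrowhead="normal", arrowtail="dot")  # default for edges declared below
            a.edge("1", "2")
            a.edges([("2", "3"), ("3", "X2"), ("X2", "5")])
            a.node("6")
            a.node("7")

        print(g.source)  # the generated DOT contains an 'edge [...]' default statement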

    Read the article

  • How to get balanced diagrams from graphviz?

    - by user360872
    Is there a setting in graphviz to generate balanced diagrams like this: When the diagram is more complex, like the one below, it isn't balanced like the one above (the 4 is below the **). Code to generate the second diagram:

        graph {
            n1 [label="+"];
            n1 -- n2;
            n2 [label="/"];
            n2 -- n3;
            n3 [label="*"];
            n3 -- n4;
            n4 [label="1"];
            n3 -- n5;
            n5 [label="2"];
            n2 -- n6;
            n6 [label="3"];
            n1 -- n7;
            n7 [label="**"];
            n7 -- n8;
            n8 [label="4"];
            n7 -- n9;
            n9 [label="5"];
        }

    Read the article

  • Export Import error 'SSIS Data Flow Task could not be created' ... registering DTSPipeline.dll, cannot create task "STOCK:PipelineTask"

    - by Moin Zaman
    I'm about to throw in the towel on this one. I'm running SQL Server 2008 Enterprise on Windows 7 x64 and can't get past this issue. When I try to import/export data from databases through SQL Server Management Studio I get the following error:

        TITLE: SQL Server Import and Export Wizard
        ------------------------------
        The SSIS Data Flow Task could not be created. Verify that DTSPipeline.dll is available and registered. The wizard cannot continue and it will terminate.
        ------------------------------
        ADDITIONAL INFORMATION:
        Cannot create a task with the name "STOCK:PipelineTask". Verify that the name is correct. ({0194F10C-9860-4A4F-AF8B-DE7EFD89859F})

    I have tried many solutions found via Google, but none of them have worked. A side issue that may be related: when I try to create an Integration Services project in Business Intelligence Studio I get a 'project creation failed' error.

    Read the article

  • PHP and MySQL related Problem

    - by Tareq
    Hi friends, I have a local LAN in my office. Recently I designed a new software system for my office using PHP and MySQL. My boss wants to be able to see the reports online. My problem is that my office's network connection often fails, but I have to keep entering data all the time. So now I want to use two instances of my software: one will use the LAN and one will be uploaded to my server. My question is, how can I easily keep both databases up to date at all times? Please help me with this issue. If you want more info, please feel free to ask me.
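
    MySQL replication is the usual tool for keeping two copies in step, but as a very rough illustration of a script-based approach, here is a hedged Python sketch (PyMySQL assumed). It presumes every synced table has a primary key id and an updated_at timestamp column, which may not match the asker's schema.

        # Rough one-way sync sketch: copy rows modified since the last sync
        # from the LAN database to the online copy. Hosts, credentials and
        # the 'id'/'updated_at' columns are assumptions.
        import pymysql
        from pymysql.cursors import DictCursor

        def sync_table(local, remote, table, since):
            with local.cursor(DictCursor) as cur:
                cur.execute(f"SELECT * FROM {table} WHERE updated_at > %s", (since,))
                rows = cur.fetchall()
            with remote.cursor() as cur:
                for row in rows:
                    cols = ", ".join(row)
                    marks = ", ".join(["%s"] * len(row))
                    updates = ", ".join(f"{c}=VALUES({c})" for c in row if c != "id")
                    cur.execute(
                        f"INSERT INTO {table} ({cols}) VALUES ({marks}) "
                        f"ON DUPLICATE KEY UPDATE {updates}",
                        list(row.values()),
                    )
            remote.commit()

        local = pymysql.connect(host="localhost", user="app", password="...", database="office")
        remote = pymysql.connect(host="example.com", user="app", password="...", database="office")
        sync_table(local, remote, "orders", "2010-01-01 00:00:00")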

    Read the article

  • parsing through html with php

    - by salmane
    While working on Facebook Connect I have to retrieve an access token from a URL (it is not in the URL itself but in the file linked to that URL), so this is what I do:

        $url = "https://graph.facebook.com/oauth/access_token?client_id=".$facebook_app_id."&redirect_uri=http://www.example.com/facebook/oauth/&client_secret=".$facebook_secret."&code=".$code;

        function get_string_between($string, $start, $end){
            $string = " ".$string;
            $ini = strpos($string,$start);
            if ($ini == 0) return "";
            $ini += strlen($start);
            $len = strpos($string,$end,$ini) - $ini;
            return substr($string,$ini,$len);
        }

        $access_token = get_string_between(file_get_contents($url), "access_token=", "&expires=");

    It looks ugly and clumsy. Is there a better way to do it? Thank you.
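
    Not PHP, but as an illustration of a less brittle approach: the response body is itself a query string (access_token=...&expires=...), so it can be handed to a standard query-string parser instead of being sliced by hand. A hedged Python sketch, with the endpoint and parameters copied from the question and placeholder credentials:

        # Hedged sketch: parse the access_token out of the OAuth response body
        # with a query-string parser instead of manual substring arithmetic.
        from urllib.parse import parse_qs, urlencode
        from urllib.request import urlopen

        FACEBOOK_APP_ID = "YOUR_APP_ID"       # placeholder
        FACEBOOK_SECRET = "YOUR_APP_SECRET"   # placeholder
        CODE = "CODE_FROM_REDIRECT"           # placeholder

        params = {
            "client_id": FACEBOOK_APP_ID,
            "redirect_uri": "http://www.example.com/facebook/oauth/",
            "client_secret": FACEBOOK_SECRET,
            "code": CODE,
        }
        url = "https://graph.facebook.com/oauth/access_token?" + urlencode(params)

        body = urlopen(url).read().decode("utf-8")   # e.g. "access_token=...&expires=..."
        fields = parse_qs(body)
        access_token = fields["access_token"][0]

    In PHP the equivalent would be parse_str(file_get_contents($url), $fields) and then $fields["access_token"].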

    Read the article

  • how to include error messages in backup reports for SQL Server 2008 R2?

    - by avs099
    Right now I have daily (differential) and weekly (full) backups set up on my SQL Server 2008 R2 as SQL Server Agent jobs, with email notifications if a job fails. I do get emails like this:

        JOB RUN: 'Daily backup.Diff backup' was run on 4/11/2012 at 3:00:00 AM
        DURATION: 0 hours, 0 minutes, 28 seconds
        STATUS: Failed
        MESSAGES: The job failed. The Job was invoked by Schedule 9 (Daily backup.Diff backup). The last step to run was step 1 (Diff backup).

    But that often happens because we delete/create new databases - and then the diff backup fails. The only way for me to see the actual reason is to go to Log Viewer - Maintenance Plans logs. Is it possible to include the "Error Message" field from the logs in the notification emails? And more generally - is it possible to change the notification email templates somehow? Thank you.
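
    One hedged workaround (rather than changing the built-in notification format): the step-level error text also lands in msdb.dbo.sysjobhistory, so a small script run as a final job step or by an external scheduler could pull it and mail it. A rough Python/pyodbc sketch; the connection string, job name and mail hand-off are assumptions:

        # Hedged sketch: pull the most recent failed job-step messages from msdb
        # so they can be included in a notification email.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;"
        )
        cur = conn.cursor()
        cur.execute("""
            SELECT TOP 10 j.name, h.step_name, h.run_date, h.message
            FROM msdb.dbo.sysjobhistory AS h
            JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
            WHERE h.run_status = 0            -- 0 = failed
              AND j.name = 'Daily backup'     -- assumed job name
            ORDER BY h.instance_id DESC
        """)
        lines = [f"{name} / {step} ({run_date}): {msg}"
                 for name, step, run_date, msg in cur.fetchall()]
        body = "\n".join(lines) or "no recent failures"
        print(body)   # hand this off to smtplib, Database Mail, etc.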

    Read the article

  • mysqld stopped working... can't restart... need help?

    - by grant tailor
    I was just checking some things and noticed that mysqld was not running in the Parallels Power Panel control panel, but my websites on the server, which all use MySQL databases, were working fine - really strange. So I tried to restart mysqld but got errors and can't restart it, and now all my websites are offline, saying "error connecting to database". I logged in as root and tried /etc/init.d/mysqld start and got this error:

        ERROR! Manager of pid-file quit without updating file

    What do I do next? Please help!

    Read the article

  • Framework/tool for processing C++ unit tests with numerical output

    - by David Claridge
    Hi, I am working on a C++ application that uses computer vision techniques to identify various types of objects in a sequence of images. The (1000+) images have been hand-classified, so we have an XML file for each image containing a description of where the objects are actually located in the images. I would like to know if there is a testing framework that can understand/graph results from tests that are numeric, in this case some measure of the error in the program's classification of the images, rather than just pass/fail style unit tests. We would like to use something like CDash/CTest for running these automated tests, and viewing over time how improvements to the vision algorithms are causing the images to be more correctly classified. Does anyone know of a tool/framework that can do this?

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, and we also want to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to have to create an Access database with links for each current database or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • SQL database testing: How to capture state of my database for rollback.

    - by Rising Star
    I have a SQL server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .NET code that will connect to the database and perform some operations. If the code under test works correctly, then the database should be in the same (or a similar) state to how it was before the tests. However, I would like to be able to roll back the database to its state from before the tests run. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program; it could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables I can quickly restore them. What is a good way to save a snapshot of one of my databases with its tables so that I can easily restore the database to its state from before the test?
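
    One common approach, sketched here with heavy hedging: a database snapshot taken before the test run and restored afterwards. This needs an edition of SQL Server 2005 that supports snapshots (Developer/Enterprise), and reverting needs exclusive access to the database. A rough Python/pyodbc sketch; database name, logical file name and path are assumptions:

        # Hedged sketch: create a database snapshot before the tests, revert after.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,   # CREATE/RESTORE DATABASE cannot run inside a transaction
        )
        cur = conn.cursor()

        def take_snapshot():
            # NAME must match the source database's logical data file name
            cur.execute("""
                CREATE DATABASE TestDb_Snap ON
                    (NAME = TestDb, FILENAME = 'C:\\Snapshots\\TestDb_Snap.ss')
                AS SNAPSHOT OF TestDb
            """)

        def revert_snapshot():
            # requires exclusive access and no other snapshots of TestDb
            cur.execute("RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_Snap'")
            cur.execute("DROP DATABASE TestDb_Snap")

        take_snapshot()
        # ... run the unit test suite here ...
        revert_snapshot()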

    Read the article

  • IIS 7 problem which does not occur under Apache

    - by cc0
    I'm hosting a little site that uses JavaScript to draw a simple graph. It involves one HTML index file, some CSS and some JS files. It has all been working perfectly on two different Apache servers, but when I set it up on IIS 7 the AJAX calls fail. I get no JavaScript debug errors in Firefox that I can work with, or any kind of error message at all. Without going into the code itself, does anyone have a similar experience with IIS? This is the first time I'm using IIS so I'm not quite sure what to expect to have trouble with. I'd love some input on this; if I have to delve into the code itself I'll make a new thread, I just thought I'd see if this could be a typical issue. Any help is appreciated!

    Read the article

  • WMI Notification and database mirroring

    - by user22215
    Hi all, I'm having a problem configuring a WMI alert that I would like to use with database mirroring. I'm running Windows Server 2008 Enterprise x64 and SQL Server 2008 Enterprise x64 with SP1 installed. Basically I click on Alerts, select WMI, and then type in the query below:

        SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE WHERE DatabaseName = 'testmove' AND State = 8

    I have also made sure Service Broker is enabled for msdb and all mirrored databases; however, I still can't get this to work - the alert never fires. I'm testing with just the alert functionality; I have not even added in the agent job yet. I tested this by right-clicking on my mirrored database and forcing it to fail over. Any help with this problem would be much appreciated.

    Read the article

  • d3 tree - parents having same children

    - by Larry Anderson
    I've been transitioning my code from JIT to D3 and working with the tree layout. I've replicated http://mbostock.github.com/d3/talk/20111018/tree.html with my tree data, but I wanted to do a little more. In my case I will need to create child nodes that merge back to form a parent at a lower level, which I realize is more of a directed graph structure, but I would like the tree to accommodate it (i.e. notice that common ids between child nodes should merge). So basically a tree that divides as normal on the way from parents to children, but then also has the ability to bring those child nodes together to be parents (sort of an incestuous relationship or something :)). The question "How to layout a non-tree hierarchy with D3" asks something similar. It sounds like I might be able to use hierarchical edge bundling in conjunction with the tree hierarchy layout, but I haven't seen that done. I might be a little off with that, though.

    Read the article

  • How do I get an Iterator over a vector of objects from a Template?

    - by nieldw
    I'm busy implementing a Graph ADT in C++. I have templates for the Edges and the Vertices. At each Vertex I have a vector containing pointers to the Edges that are incident to it. Now I'm trying to get an iterator over those edges. These are the lines of code:

        vector<Edge<edgeDecor, vertexDecor, dir>*> edges = this->incidentEdges();
        vector<Edge<edgeDecor, vertexDecor, dir>*>::const_iterator i;
        for (i = edges.begin(); i != edges.end(); ++i) {

    However, the compiler won't accept the middle line. I'm pretty new to C++. Am I missing something? Why can't I declare an iterator over objects from the Edge template? The compiler isn't giving any useful feedback. Much thanks niel

    Read the article

  • How to do a cost-benefit analysis for platform-level features?

    - by Callister Park
    I work on a development team that works closely with Product Managers. There is mutual agreement between the developers and Product Managers that there should be a business case behind every feature the development team builds. My question is, what is an effective way to make a business case for platform-level features that have higher up front cost but will provide long term benefits? For example, the development team would like to implement a plug-in framework. There is the higher up-front cost to implement a plug-in framework but delivering the subsequent features as plug-ins will be cheaper in the long run. This is obvious to everyone including the Product Managers. Is there a standard/simple way to express the cost-benefits? Is there a simple way to visualize it with a graph?
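
    One simple way to frame it is a break-even calculation: the cumulative cost of shipping N features with and without the framework, with the crossover point as the headline number. A tiny Python sketch, with purely hypothetical figures standing in for the team's real estimates:

        # Hypothetical numbers purely for illustration: the framework costs more
        # up front but makes each subsequent feature cheaper to deliver.
        FRAMEWORK_UPFRONT = 40        # person-days to build the plug-in framework
        COST_PER_FEATURE_PLUGIN = 5   # person-days per feature as a plug-in
        COST_PER_FEATURE_ADHOC = 9    # person-days per feature without the framework

        def cumulative_cost(n_features, upfront, per_feature):
            return upfront + n_features * per_feature

        for n in range(0, 21, 5):
            with_fw = cumulative_cost(n, FRAMEWORK_UPFRONT, COST_PER_FEATURE_PLUGIN)
            without = cumulative_cost(n, 0, COST_PER_FEATURE_ADHOC)
            print(f"{n:2d} features: {with_fw:4d} vs {without:4d} person-days")

        # break-even point in number of features
        breakeven = FRAMEWORK_UPFRONT / (COST_PER_FEATURE_ADHOC - COST_PER_FEATURE_PLUGIN)
        print(f"break-even at about {breakeven:.0f} features")

    Plotting the two cumulative-cost curves gives the kind of graph the question asks about, with the break-even point as the visual anchor.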

    Read the article

  • Unknown error when trying to get long lived access token

    - by Marius.Radvan
    I am trying to get a long-lived access token for one of my pages, using this code:

        $page_info = $facebook->api("/page-id?fields=access_token");
        $access_token = array(
            "client_id"         => $facebook->getAppId(),
            "client_secret"     => $facebook->getAppSecret(),
            "grant_type"        => "fb_exchange_token",
            "fb_exchange_token" => $page_info["access_token"]
        );
        $result = $facebook->api("/oauth/access_token", $access_token);
        echo json_encode($result);

    ... but I get this response:

        {"error_code":1,"error_msg":"An unknown error occurred"}

    I get the same response if I browse to

        https://graph.facebook.com/oauth/access_token?client_id=APP_ID&client_secret=APP_SECRET&grant_type=fb_exchange_token&fb_exchange_token=EXISTING_ACCESS_TOKEN

    as stated in https://developers.facebook.com/roadmap/offline-access-removal/#page_access_token
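
    A hedged debugging step: replay the same request outside the SDK and print the raw response body, which often carries a more specific error than "unknown error". A Python sketch using the requests library, with the endpoint and parameters exactly as in the question and placeholder values:

        # Hedged sketch: replay the token exchange and dump the raw response.
        # APP_ID / APP_SECRET / EXISTING_ACCESS_TOKEN are placeholders.
        import requests

        resp = requests.get(
            "https://graph.facebook.com/oauth/access_token",
            params={
                "client_id": "APP_ID",
                "client_secret": "APP_SECRET",
                "grant_type": "fb_exchange_token",
                "fb_exchange_token": "EXISTING_ACCESS_TOKEN",
            },
        )
        print(resp.status_code)
        print(resp.text)   # either "access_token=...&expires=..." or an error body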

    Read the article

  • Logging out of Facebook invalidates offline_access token

    - by Mike Pateras
    I'm getting an offline access token like this:

        https://graph.facebook.com/oauth/access_token?scope=offline_access&client_id=MYCLIENTID&redirect_uri=MYREDIRECTURI&client_secret=MYSECRET&code=MYCODE

    Obviously MYCLIENTID and the other values have been changed for the sake of this post. Anyway, as soon as the user logs out of Facebook, the key seems to no longer be valid. Am I not requesting offline_access properly (there's still an "expires" value on it - should there be, if it is actually getting offline access), or is that just how it works? If it's the latter, how can I get a key that will persist regardless of whether the user logs out of Facebook? I'm sure this is possible, because TweetDeck can still write to Facebook even though I'm currently logged out.

    Read the article

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "persistent disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots, therefore if our databases trigger a consistent state (block all writes, and flush buffers to disk), I would assume I could take snapshots every hour without a service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, so as to save on the disk space actually used? If a rollback needed to be done, how long does it typically take? Is a rollback also instantaneous? Thanks!
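
    Assuming ZFS rather than UFS (an assumption; the question doesn't say which), the snapshot-plus-incremental-send model is roughly what the Google persistent-disk snapshots look like. A hedged Python wrapper around the usual zfs commands; dataset, remote host and snapshot naming are all placeholders:

        # Hedged sketch: snapshot a ZFS dataset and ship it incrementally offsite.
        # The database should be put in a consistent state before snapshot() runs,
        # as the question describes.
        import datetime
        import subprocess

        DATASET = "tank/db"            # hypothetical dataset holding the database files
        REMOTE = "backup@offsite"      # hypothetical ssh target

        def snapshot(name):
            subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

        def send_incremental(prev, curr):
            send = subprocess.Popen(
                ["zfs", "send", "-i", f"{DATASET}@{prev}", f"{DATASET}@{curr}"],
                stdout=subprocess.PIPE,
            )
            subprocess.run(
                ["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                stdin=send.stdout, check=True,
            )
            send.stdout.close()
            if send.wait() != 0:
                raise RuntimeError("zfs send failed")

        name = datetime.datetime.now().strftime("%Y%m%d%H%M")
        snapshot(name)
        send_incremental("previous", name)   # 'previous' stands in for the last shipped snapshot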

    Read the article

  • Get a list of events owned by a facebook page

    - by Tom Wright
    Does anyone know how I can get a list of events owned (created) by a Facebook page? I seem to be able to use the Graph API to generate a list of the events an entity is attending. I also looked at FQL, but it seems to require that the 'where' clause uses an indexable field (and, naturally, the id is the only indexable field). For bonus points, we'd like to be able to do this without any authentication. (Though I'm resigned to the fact that I'm likely going to need at least a permanent access_token.) If anyone knows how to do this I'd be eternally grateful.
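
    A hedged sketch of what that might look like against the Graph API, assuming the /{page-id}/events connection returns the page's own events and that an access token is needed; both assumptions are worth checking against the current Facebook documentation:

        # Hedged sketch: list events connected to a page via the Graph API.
        # PAGE_ID and ACCESS_TOKEN are placeholders.
        import requests

        PAGE_ID = "PAGE_ID"
        ACCESS_TOKEN = "ACCESS_TOKEN"

        resp = requests.get(
            f"https://graph.facebook.com/{PAGE_ID}/events",
            params={"access_token": ACCESS_TOKEN},
        )
        resp.raise_for_status()
        for event in resp.json().get("data", []):
            print(event.get("id"), event.get("name"), event.get("start_time"))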

    Read the article

  • Which SQL Server edition?

    - by StaringSkyward
    We need a new install of Windows Server and SQL Server to replicate a couple of databases to a geographically separate location from an existing application (over a site-to-site VPN). The source database is SQL Server 2005. However, this is a temporary solution, since the client is aiming to implement a different system entirely, so we are looking for the minimum specification of both Windows Server and SQL Server that can do this. We are finding the SQL Server features per edition and the licensing a little difficult to understand, hence the question. Am I correct in thinking that we can replicate data using transactional replication from SQL Server 2005 to SQL Server 2008 Web Edition, and that we can install SQL Server Web Edition on Windows Server 2008 Web Edition as well? Thanks.

    Read the article

  • PostgreSQL, update existing rows with pg_restore

    - by woky
    Hello. I need to sync two PostgreSQL databases (some tables from a development db to a production db) sometimes. So I came up with this script:

        [...]
        pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \
        pg_restore -a -U user2 -d dbname2
        [...]

    The problem is that this only works for newly added rows. When I edit a non-PK column I get a constraint error and the row isn't updated. For each dumped row I need to check if it exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY. Thanks for your advice. (Previously posted on stackoverflow.com, but IMHO this is a better place for this question.)
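
    One hedged workaround among several: since a data-only pg_restore can't update rows in place, truncate the target tables first and then reload them. A rough Python sketch using psycopg2 plus the same pg_dump/pg_restore pipeline as the question; note there is a short window where the destination tables are empty, and CASCADE also truncates tables that reference them:

        # Hedged sketch: empty the destination tables, then replay the
        # data-only dump into them. Credentials/table names follow the question.
        import subprocess
        import psycopg2

        TABLES = ["table1", "table2"]

        conn = psycopg2.connect(dbname="dbname2", user="user2")
        with conn, conn.cursor() as cur:
            cur.execute("TRUNCATE {} CASCADE".format(", ".join(TABLES)))
        conn.close()

        dump = subprocess.Popen(
            ["pg_dump", "-a", "-F", "tar", "-t", "table1", "-t", "table2",
             "-U", "user1", "dbname1"],
            stdout=subprocess.PIPE,
        )
        subprocess.run(
            ["pg_restore", "-a", "-U", "user2", "-d", "dbname2"],
            stdin=dump.stdout, check=True,
        )
        dump.stdout.close()
        dump.wait()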

    Read the article

  • Problems with the backup

    - by marcodv
    I have a script which runs around 4 o'clock in the morning and backs up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete, and more than 50% of these VMs need more than 8 hours to finish. More or less all the VMs have the same configuration: the same amount of RAM, the same amount of disk space, the same number of CPUs, and Debian 6.0.5. I am saving these backups on Amazon S3, because it is the cheapest solution I've found. Now my question is: does anyone have any solutions or suggestions about this? On one blog I've read that a combination of ionice and nice could be a good workaround. Any thoughts?
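
    Along the lines of the nice/ionice idea the asker mentions, here is a hedged sketch of one way a per-host script could be structured: dump each database under reduced CPU/IO priority, compress, and push to S3, with a small worker pool so only a few dumps run at once. Bucket name, paths, database list and concurrency are all assumptions.

        # Hedged sketch: run each mysqldump under nice/ionice, gzip it, and push
        # it to S3 with boto3. mysqldump credentials are assumed to come from
        # ~/.my.cnf; bucket and paths are placeholders.
        import datetime
        import os
        import subprocess
        from concurrent.futures import ThreadPoolExecutor
        import boto3

        BUCKET = "my-backup-bucket"          # hypothetical bucket
        DATABASES = ["db1", "db2", "db3"]    # normally discovered via SHOW DATABASES
        STAMP = datetime.date.today().isoformat()

        s3 = boto3.client("s3")

        def backup(db):
            path = f"/var/backups/{db}-{STAMP}.sql.gz"
            with open(path, "wb") as out:
                dump = subprocess.Popen(
                    ["ionice", "-c2", "-n7", "nice", "-n19", "mysqldump", db],
                    stdout=subprocess.PIPE,
                )
                subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
                dump.stdout.close()
                dump.wait()
            s3.upload_file(path, BUCKET, f"{STAMP}/{os.uname().nodename}/{db}.sql.gz")

        # limit concurrency so the host is not overwhelmed
        with ThreadPoolExecutor(max_workers=2) as pool:
            list(pool.map(backup, DATABASES))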

    Read the article

  • In place SQL 2008 upgrade vs. Side by side?

    - by Jim
    I have a SQL 2005 Std edition server with 5 databases in production, 4 db's are used by web-based apps the 5th is a desktop application. My question is should I perform an in-place upgrade or a side-by-side by creating an sql2008 instance on the same box? The machine is a VM on vmware and I'm planning on taking a snapshot before the upgrade and having a 'blackout' window during the upgrade so that I could roll back to the snapshot if things go really bad. Any previous experience and advice is appreciated.

    Read the article
