Search Results

Search found 60391 results on 2416 pages for 'data generation'.

Page 758/2416 | < Previous Page | 754 755 756 757 758 759 760 761 762 763 764 765  | Next Page >

  • SQL Transactional Replication snapshot not applying

    - by dmch2
    Hi, I'm using SQL Transactional Replication with pull subscriptions to replicate databases (each hosting its own distribution database) from several servers across a VPN to a central server. I've got the first two databases working fine, but the third one is causing me problems. My subscription server is SQL 2008; the source systems are all SQL 2005. The source databases are a few hundred MB in size and contain audit data, so they simply grow slowly by adding new records at approximately 1 KB a second.

    As far as the replication monitor, agent logs and event logs show, everything is working fine - except that no data appears in my subscription database. The distribution agent doesn't seem to want to read the snapshot (and hence the initial state and schema) from the publisher. New transactions aren't applied, although they do seem to be arriving OK, as the replication monitor shows things like '5 transactions with 10 commands were delivered'. I would expect (as in previous cases) to see statements about data being BCPed in the replication monitor.

    The snapshot is on the publisher in a shared folder. The subscriber can view the snapshot OK (\\repldata) and the alt snapshot folder is pointing at it, but the distribution agent doesn't seem to be making any attempt to read it. I tried changing the snapshot path to something that's incorrect and didn't even get an error saying that it couldn't be accessed.

    After lots of googling I found that sp_MSget_repl_commands is called by the subscriber on the distribution database on the publisher. Running a profiler trace, I can see that it's only called for one agent id. After a reinit it's called for sequence number 0x0 as expected, so I thought that meant it would look for the snapshot. However, looking on the publisher I see that there's data for two agents - the snapshot agent and the log reader agent (which is the one being queried). So I guess I need to tell the distribution agent to get the data for both. But how? And more importantly - why? It worked fine on the other two servers I've replicated.

    I'm not an SQL novice but this is pretty much my first go at replication, so don't be afraid to accuse me of missing something obvious/stupid! I can get log files (e.g. from the distribution agent) if you want, but they don't seem to have any errors in them - it just starts up and starts applying log reader agent changes. Cheers, Dave

    Read the article

  • HP launches new support services to optimize applications on x86 servers: better performance and lower costs

    HP launches new support services to optimize critical applications on HP x86 servers: HP Critical Advantage, for better performance and lower costs. HP today announces a new service in its Mission Critical portfolio (its high-end line of support solutions): HP Critical Advantage. Its goal? "To help manage the complexity of the hardware and software in latest-generation infrastructures, so that the promises of new technologies are kept." The offering focuses on highly virtualized x86 environments and lets customers draw on "the experts" of HP support centers around the world (who have a view ...

    Read the article

  • SQL Server 2008 Spatial Index Performance

    The institution I work with has decided to migrate their database system to SQL Server 2008. One of the applications uses geospatial data, which consists of millions of rows. I understand that there are indexes that can be used for geospatial data, but I have not worked with them. What's the scoop on them?

    Read the article

  • New Interoperability Solutions for SQL Server 2012

    - by The Official Microsoft IIS Site
    I am excited to share some great news about how we are opening up the SQL Server data platform even further with expanded interoperability support through new tools that allow customers to modernize their infrastructure while maximizing existing investments and extending virtually any data anywhere. The SQL Server team today introduced several tools that enable interoperability with SQL Server 2012. These tools help developers to build secure, highly available and high performance applications for...(read more)

    Read the article

  • Easy QueryBuilder - A User-Friendly Ad-Hoc Advanced Search Solution

    Constructing an easy yet powerful QueryBuilder interface is increasingly important for complex data grid filtering and accurate reporting services. In this article, I'll discuss how to build a query search engine using ASP.NET AJAX and dynamic SQL. The main goal is to provide an interactive interface that lets users select query attributes, operators, attribute values, and T-SQL operators so that the data query can be easily composed and the search engine invoked.
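    As a rough sketch of the underlying idea (not the article's ASP.NET AJAX implementation, and in Java rather than C#), the snippet below composes a parameterized WHERE clause from user-selected attribute/operator/value triples; the Condition record, the column and operator whitelists, and the sample table are all invented for this illustration.

        import java.util.*;

        // Minimal sketch: build a parameterized WHERE clause from user-selected
        // (attribute, operator, value) triples. All names here are illustrative.
        public class QueryBuilderSketch {

            record Condition(String attribute, String operator, Object value) {}

            // Only whitelisted columns and operators reach the SQL text; values are
            // collected separately so they can be bound via PreparedStatement.
            private static final Set<String> COLUMNS = Set.of("last_name", "city", "order_total");
            private static final Set<String> OPERATORS = Set.of("=", "<>", "<", ">", "LIKE");

            // joiner is expected to be "AND" or "OR"
            static String buildWhere(List<Condition> conditions, String joiner, List<Object> paramsOut) {
                List<String> parts = new ArrayList<>();
                for (Condition c : conditions) {
                    if (!COLUMNS.contains(c.attribute()) || !OPERATORS.contains(c.operator())) {
                        throw new IllegalArgumentException("Unsupported attribute or operator: " + c);
                    }
                    parts.add(c.attribute() + " " + c.operator() + " ?");
                    paramsOut.add(c.value());
                }
                return parts.isEmpty() ? "" : " WHERE " + String.join(" " + joiner + " ", parts);
            }

            public static void main(String[] args) {
                List<Object> params = new ArrayList<>();
                String where = buildWhere(
                        List.of(new Condition("city", "=", "London"),
                                new Condition("order_total", ">", 100)),
                        "AND", params);
                System.out.println("SELECT * FROM orders" + where + "   -- params: " + params);
            }
        }

    Whatever the stack, the safety of such a builder rests on letting user input only choose from known attributes and operators, while the values travel as bound parameters rather than concatenated SQL.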

    Read the article

  • Get selected object from TreeView

    - by GoGoDo
    I've been working on a minor (first time) app with Quickly and hit a hurdle - how do I get the selected row (the data) from a TreeView? The data for the TreeView is built from a list of files in a directory, and I need to know which rows were selected (and thus which files). What is the best way to do that? Here's the current code:

        self.treeview = self.builder.get_object("treeview")
        select = self.treeview.get_selection()
        select.connect("changed", self.on_tree_selection_changed)

        def on_tree_selection_changed(self, selection):
            model, treeiter = selection.get_selected()
            if treeiter is not None:
                print "You selected", model[treeiter][0]

    Read the article

  • Performance Tune IBM DB2 z/OS Applications using Resource Constraint Analysis

    For the DB2 for z/OS professional the two most common systems tuning scenarios are tuning a DB2 data sharing group or tuning a series of application SQL statements. The data sharing group environment can involve multiple hardware installations and many other cross-system features and functions such as coupling facilities and management policies. Resource constraint analysis is a useful tool in both situations.

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines.

    - Some lines might need to be dropped (according to criteria that are not fixed).
    - The lines that are not dropped must be parsed into some predefined records; records that are not valid must be dropped.
    - Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept.
    - The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on).
    - The records of each group should be numbered (1, 2, 3, 4, ...), with the numbering starting from 1 for each group.
    - The records must then be saved somewhere / consumed in the same order as they were produced.

    I have to implement this in Java or C++. My first idea was to define functions / methods like: one method to get all the lines from the file, one to filter out the unwanted lines, one to parse the filtered lines into valid records, one to remove duplicate records, and one to group records and number them.

    The problem is that the data I am going to read can be too big and might not fit into main memory, so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed all of its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of.

    With the little knowledge I have of Haskell, I immediately thought of some kind of lazy evaluation, in which, instead of applying functions to lists that have been completely computed, I have different streams of data built on top of each other where, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is: which design pattern or other technique would let me implement this lazy processing of streams in one of these languages?
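    Since the question allows Java, here is a minimal sketch (a recent JDK, roughly Java 16+) of how such a pipeline can stay lazy with java.util.stream: Files.lines pulls one line at a time, so only the record currently in flight is materialized. The DataRecord shape, the semicolon-separated format and the parse logic are assumptions made purely for the example, and the stateful duplicate-removal and numbering steps are only valid for a sequential (non-parallel) stream.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Optional;
        import java.util.stream.Stream;

        // Sketch of a lazily evaluated pipeline: each stage only sees the record
        // currently in flight. DataRecord, parse() and the ';' format are invented.
        public class LazyPipelineSketch {

            record DataRecord(String groupKey, String payload) {}

            // Hypothetical parser: an empty result means the line is not a valid record.
            static Optional<DataRecord> parse(String line) {
                String[] parts = line.split(";", 2);
                return (parts.length == 2 && !parts[0].isBlank())
                        ? Optional.of(new DataRecord(parts[0].trim(), parts[1].trim()))
                        : Optional.empty();
            }

            public static void main(String[] args) throws IOException {
                Path input = Path.of(args[0]);   // input file passed on the command line

                // State for the two inherently sequential steps (consecutive-duplicate
                // removal and per-group numbering). Only valid for a sequential stream.
                DataRecord[] previous = new DataRecord[1];
                String[] currentGroup = new String[1];
                int[] counter = new int[1];

                try (Stream<String> lines = Files.lines(input)) {       // lazy source
                    lines.filter(line -> !line.isBlank())                // drop unwanted lines
                         .map(LazyPipelineSketch::parse)                 // parse into records
                         .flatMap(Optional::stream)                      // drop invalid records
                         .filter(r -> {                                  // keep one of consecutive duplicates
                             boolean duplicate = r.equals(previous[0]);
                             previous[0] = r;
                             return !duplicate;
                         })
                         .forEachOrdered(r -> {                          // number within group and consume
                             if (!r.groupKey().equals(currentGroup[0])) {
                                 currentGroup[0] = r.groupKey();
                                 counter[0] = 0;
                             }
                             counter[0]++;
                             System.out.println(r.groupKey() + " #" + counter[0] + ": " + r.payload());
                         });
                }
            }
        }

    An equivalent pull-based design that avoids java.util.stream entirely is a chain of Iterator (or C++ input-iterator) decorators, where each stage wraps the previous one and advances it only when its own next() is called.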

    Read the article

  • Why doesn't Unity 3D work on my HD3000 integrated graphics?

    - by Zatsugami
    So I recently got my new laptop, an HP Envy 15 with a switchable graphics card. I'm using both Windows and Ubuntu, but on Ubuntu I want to use just the Intel HD3000 for better battery life. I've installed fglrx-updates and fglrx-amdcccle-updates, and the drivers seem to work for my ATI card. The problem is that the Intel HD3000 does not support Unity 3D. Why? My older Intel GMA 4500 handled it.

        lspci -nn | grep VGA
        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0126] (rev 09)
        01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Whistler [AMD Radeon HD 6600M Series] [1002:6741] (rev ff)

    Read the article

  • Qualcomm presents the new Snapdragon S4 processor line, capable of powering future Windows 8 tablets

    Qualcomm presents the new Snapdragon S4 processor line, capable of powering future Windows 8 tablets. Qualcomm, the maker of chips for smartphones and tablets, has just announced the release of new processor models in the Snapdragon family. The S4 line, the new generation of high-performance chips with 3G/4G optimization for high-end smartphones and tablets, now includes 8 new models: MSM8660A, MSM8260A, MSM8630, MSM8230, MSM8627, MSM8227, APQ8060A and APQ8030. The new S4 parts are available in single-core, dual-core and quad-core versions. The quad-core chips will be based on an A...

    Read the article

  • Caching issue with Centos forwarding DNS server

    - by Paddington
    I installed a forwarding DNS server on CentOS 5.10 and it is resolving addresses, e.g. google.com. When I stopped named (service named stop) and tried to dig (dig @localhost A google.com) there was a failure to resolve the address. I checked and saw that the caching daemon nscd is running. Does this mean the server is not caching at all? How can I get it to cache?

    named.conf:

        options {
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;
            // Put files that named is allowed to write in the data/ directory:
            listen-on port 53 {127.0.0.1; 10.0.0.4;};
            directory "/var/named"; // the default
            dump-file "/var/named/chroot/var/named/data/cache_dump.db";
            statistics-file "/var/named/chroot/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/chroot/var/named/data/named_mem_stats.txt";
            // allow-query {localhost; 192.168.0.0/24; 10.0.0.0/8;};
            recursion yes;
            //allow-query { localhost; 10.0.0.0/8;};
            allow-query { localhost; any; };
            allow-query-cache { localhost; any; };
            forward only;
            forwarders {8.8.8.8; 8.8.4.4;};
            dnssec-enable yes;
            // dnssec-lookaside auto;
            /* Path to ISC DLV key */
            // bindkeys-file "/etc/named.iscdlv.key";
            // managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

    Read the article

  • Oracle Exadata X3 Launch Webcast

    - by Cinzia Mascanzoni
    Available on-demand, this webcast covers everything your partners need to know about Oracle’s next-generation database machine. They will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Partners won’t want to miss this webcast. Invite them to watch today! View and share the replay.

    Read the article

  • When is a glue or management class doing too much?

    - by jprete
    I'm prone to building centralized classes that manage the other classes in my designs. Such a class doesn't store everything itself, but most data requests go to the "manager" first. While looking at an answer to this question I noticed the term "God Object". Wikipedia lists it as an antipattern, understandably. Where is the line between a legitimate glue class, or module, that passes data and messages from place to place, and a class that is doing too much?

    Read the article

  • We’re looking for SQL People

    - by simonsabin
    We are growing our data team at Wonga. If you are working in the SQL Server space and would like to join one of the fastest-growing tech companies in Europe, then please get in touch ( http://sqlblogcasts.com/blogs/simons/contact.aspx ). We have positions for production DBAs, Data QA analysts and SQL generalists (with a BI tendency). We also have generalist production support roles. Wonga is currently 3rd in the Times Tech Track 100, having been 1st last year. Being in the top 3 for two years...(read more)

    Read the article

  • Using ASCMD to run command line scripts for SQL Server Analysis Services

    Sometimes it would be helpful to run scripts from a command line for Analysis Services. This would be useful for things like creating backups, processing data or running other tasks. Is there a command line tool like sqlcmd for multidimensional databases and Data Mining?

    Read the article

  • SSIS Basics: Using the Merge Join Transformation

    SSIS is able to take sorted data from more than one OLE DB data source and merge it into one table, which can then be sent to an OLE DB destination. This 'Merge Join' transformation works in a similar way to a SQL join by specifying a 'join key' relationship, and it can save a great deal of processing on the destination. Annette Allen, as usual, gives clear guidance on how to do it.
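    Not the SSIS component itself, but the underlying sort-merge idea can be sketched in a few lines of Java: because both inputs arrive already sorted on the join key, a single forward pass over each side is enough, which is where the saving over an unsorted join comes from. The types and sample data below are invented for the illustration and assume unique keys on the left-hand side.

        import java.util.List;

        // Sketch of a sort-merge join over two inputs pre-sorted on the join key.
        // Assumes customer ids are unique; duplicate keys on both sides need buffering.
        public class SortMergeJoinSketch {

            record Customer(int id, String name) {}
            record Order(int customerId, String item) {}

            static void join(List<Customer> customers, List<Order> orders) {
                int c = 0, o = 0;
                while (c < customers.size() && o < orders.size()) {
                    int cmp = Integer.compare(customers.get(c).id(), orders.get(o).customerId());
                    if (cmp < 0) {
                        c++;          // customer key is behind: advance the customer side
                    } else if (cmp > 0) {
                        o++;          // order key is behind: advance the order side
                    } else {
                        // keys match: emit the joined row, then move to the next order
                        System.out.println(customers.get(c).name() + " bought " + orders.get(o).item());
                        o++;
                    }
                }
            }

            public static void main(String[] args) {
                join(List.of(new Customer(1, "Ann"), new Customer(2, "Bob"), new Customer(4, "Cid")),
                     List.of(new Order(1, "book"), new Order(2, "lamp"), new Order(2, "pen"), new Order(3, "mug")));
            }
        }

    In SSIS the same sorted-input requirement surfaces as the IsSorted/SortKeyPosition metadata (or an explicit Sort transformation) that the Merge Join expects on both inputs.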

    Read the article

  • Dell announces a 10-inch Windows 7 tablet aimed at businesses for late 2011

    Dell announces it is building a 10-inch Windows 7 tablet for businesses. Dell will launch a 10-inch Windows 7 tablet by the end of this year at the latest. At a press conference in San Francisco, the manufacturer announced a major refresh of its PCs, laptops and workstations, and the release of a Windows 7 tablet aimed mainly at vertical markets such as education, aviation and finance... At the event, the firm showed only a mock-up of its future tablet, dubbed "Windows 7 Business Tablet", running Windows 7 and using its next generation of Intel Atom processors, known as...

    Read the article

  • WPF Wonders: Using DataTemplates

    WPF data templates let you determine the pieces a repetitive control uses to display its items. Learn some unique and interesting ways to use data templates for displaying the items in ListBoxes, ComboBoxes, and ListViews.

    Read the article

  • XNA VertexBuffer.SetData performance suggestions

    - by CodeSpeaker
    I have a 3D world in a grid layout where each grid cell contains its own separate vertex and index buffer for the mesh/terrain of that cell. When the player moves outside the boundaries of his cell, I dynamically load more cells in his walking direction based on his viewing distance. This triggers a number of vertex and index buffer initializations, depending on how many cells need to be generated, and causes the framerate to drop annoyingly during this time. The generation of terrain data is handled in a separate thread and runs smoothly. The vertex and index buffers are added during the update cycle of the game loop. I've tried batching the number of cells to be processed to avoid sending too much data at once into the buffers, which worked OK at a shorter viewing distance (about 9 cells to process), but not as well at greater distances with around 30 cells to process. Any idea how I can optimize this?

    Read the article

  • Liquid XML 2012 Service Pack 1 available

    - by bconlon
    Liquid XML Editor is one of my favourite tools, but I was slightly concerned with the original 2012 release, as the new XML Data Mapper tool was a bit buggy. So I was pleased to see SP1 is now available for download. Sure enough, the issues have been fixed and it's once more a great tool! The data mapper can also now be run from the command line (this was a little limiting before, as you had to open the IDE to run the mapping) and the Help now contains full documentation.

    Read the article

  • SAP courts small businesses and startups; the vendor wants it known that its solutions are not just for the "big players"

    SAP courts small businesses and start-ups, and wants it known that its solutions are not just for multinationals. HANA is not reserved for multinationals. SAP pulled out all the stops to make that point at a "competition" for start-ups that want to use the German vendor's In-Memory Computing technology to drastically speed up the processing of their data, and therefore their computation times. This Paris edition of the "SAP Startup Forum" saw a wide variety of companies take part. Some were launched several years ago (such as Kxen, which creates tools for the "second generation of pre...

    Read the article

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ... [Read Full Article]

    Read the article

  • One-man software developer product success stories? [on hold]

    - by EugeneKr
    I've got a bad feeling that this question is not appropriate here. Hopefully you can point me to the right place to ask such a thing (not Google though, been there). I want to create my own product, but for some reason I have no ideas, so I decided to see what people have already done. I would also like to start by myself; I don't mind expanding, but only at later stages when it is absolutely necessary. Anyway, to give you an example: there is a guy who created bingo card generation software, then somebody made wedding planner software, and they seem to be doing pretty well. I would like to know more such cases to draw inspiration from. Do you know such people, or maybe you are one of them? Also, if there are places on the net where they dwell, don't hesitate to tell me :) Thanks!

    Read the article

  • State Transition Constraints

    Data validation in a database is a lot more complex than seeing if a string parameter really is an integer. The commercial world is full of complex rules for sequences of procedures, fixed or variable lifespans, warranties, commercial offers and bids. All this requires considerable subtlety to prevent bad data getting in and, if it does, to locate and fix the problem. Joe Celko shows how useful a state transition graph can be, and how essential it can become once the time aspect is added.
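    The article works in SQL, but as a rough language-agnostic illustration of the core idea, the sketch below declares the legal moves as a small transition graph and rejects any status change that is not in it; the states and transitions are invented for the example.

        import java.util.Map;
        import java.util.Set;

        // Minimal sketch of a state-transition check: only moves that appear in the
        // declared graph are allowed. States and transitions are invented here.
        public class OrderStateMachine {

            // Each state maps to the set of states it may legally move to.
            private static final Map<String, Set<String>> ALLOWED = Map.of(
                    "created",   Set.of("paid", "cancelled"),
                    "paid",      Set.of("shipped", "refunded"),
                    "shipped",   Set.of("delivered"),
                    "delivered", Set.of(),
                    "cancelled", Set.of(),
                    "refunded",  Set.of());

            static void validateTransition(String from, String to) {
                if (!ALLOWED.getOrDefault(from, Set.of()).contains(to)) {
                    throw new IllegalStateException("Illegal transition: " + from + " -> " + to);
                }
            }

            public static void main(String[] args) {
                validateTransition("created", "paid");      // fine
                validateTransition("shipped", "delivered"); // fine
                validateTransition("delivered", "paid");    // throws: not in the graph
            }
        }

    In a database the same graph typically becomes a two-column table of (previous_state, next_state) pairs that a constraint or trigger consults before accepting an update.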

    Read the article
