Search Results

Search found 21589 results on 864 pages for 'primary key'.


  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column: CREATE TABLE Things ( ... CreatedDate datetime DEFAULT getdate(), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate) ) But I can't guarantee that two Things won't have the same time, so my requirements can't really be met by a datetime column. I could add a dummy identity int column and cluster on that: CREATE TABLE Things ( ... RowID int IDENTITY(1,1), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID) ) But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want in a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column: CREATE TABLE Things ( ... [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp) ) Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: community wiki, since the answers are subjective.
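
    One caveat worth weighing (my note, not from the question): SQL Server rewrites a rowversion/timestamp value on every UPDATE of a row, so a clustered index on it would relocate updated rows and fragment the index, which works against the append-only goal unless rows are never updated. A minimal sketch of the identity-based alternative already mentioned above:

        -- Sketch: keep inserts append-only with an ever-increasing surrogate key.
        -- Note: a rowversion/timestamp value changes on every UPDATE of the row,
        -- so clustering on it would move rows on each update.
        CREATE TABLE Things
        (
            RowID       int IDENTITY(1,1) NOT NULL,
            CreatedDate datetime NOT NULL DEFAULT GETDATE(),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID)
        );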

    Read the article

  • SBS 2011 on different subnet than domain computers

    - by Ravi
    The setup is as follows: SBS 2011 in a datacentre on subnet A; domain PCs at another location on subnet B; there is a site-to-site VPN between them. The domain PCs have joined the domain and use the SBS as their primary DNS server. The domain PCs can ping the DC, but the problem is that the DC cannot ping anything on the remote subnet (subnet B). SBS -- Switch -- Router A ------------------- Router B -- Switch -- Domain PCs What is strange is that Router A can ping any host on subnet B, and another host on subnet A can also ping any host on subnet B. It's only the DC that cannot reach that specific remote subnet B. I did a tracert from the SBS to Router B: the packet reaches Router A from the SBS but then it fails. Am I missing some specific settings that need to be done when SBS is on a different subnet than its member PCs?
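
    A couple of things worth checking (my suggestions, not from the question): the SBS box may simply lack a route to subnet B via Router A, and the Windows firewall on the DC may be dropping traffic to or from non-local subnets. A sketch, assuming subnet B is 10.0.2.0/24 and Router A's LAN address is 10.0.1.1:

        rem Run on the SBS/DC: inspect the routing table, then add a persistent route if subnet B is missing.
        route print
        route add -p 10.0.2.0 mask 255.255.255.0 10.0.1.1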

    Read the article

  • Moving MS Exchange 2007 to another machine

    - by Mustafa Ismail Mustafa
    We have a machine that has been chugging along under the burden of Exchange, DC and DNS, all on SBS 2008. We have a better machine now and I'd like to move Exchange 2007 to it and take it off this machine. In fact, I'm planning on formatting the old machine and getting rid of SBS altogether, because it is making the machine SLOW. How would I go about making the move? I've read that on previous versions of Exchange (2000) all you do is install Exchange on the new machine and then move mailboxes one after the other. Well, what about all the different rules we have in place? How do those get moved? How do we decommission the old Exchange server and set up the new one as the primary? Come to think of it, how do we have both Exchange servers recognize each other on the same domain? TIA
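
    For the mailbox part specifically, a minimal Exchange 2007 Management Shell sketch (server, storage group and database names are placeholders; it assumes the new server is installed into the same Exchange organization, which is what makes the two servers recognize each other):

        Get-Mailbox -Database "OLDSERVER\First Storage Group\Mailbox Database" |
            Move-Mailbox -TargetDatabase "NEWSERVER\First Storage Group\Mailbox Database"

    Server-specific pieces such as receive connectors and the offline address book generation server still have to be recreated or re-homed on the new server before the old one is uninstalled.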

    Read the article

  • Examples of monoids/semigroups in programming

    - by jkff
    It is well-known that monoids are stunningly ubiquitous in programing. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these: Numeric or matrix sum Numeric or matrix product Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category) Set union Map union where conflicting values are joined using a monoid Intersection of subsets of a finite set (or just set intersection if we speak about semigroups) Intersection of maps with a bounded key domain (same here) Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup Bounded merge of sorted lists (same as above, but we take the top N of the result) Cartesian product of two monoids or semigroups List concatenation Endomorphism composition. Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length or with identical contents up to permutation to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups: Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent) Any satisfying predicate (a+b = the one of a and b that is non-null and satisfies some predicate P, if none does then null; if we consider all elements satisfying P equivalent) Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent) Bounded mixture of weighted random samples Which others do exist?
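
    To make the pattern concrete, here is a minimal Java sketch of the abstraction behind this list (the interface and the max example are my own illustration, not part of the question):

        // An associative combine with an identity element; every entry in the list above
        // is some instance of this shape (drop identity() and you have a semigroup).
        interface Monoid<T> {
            T identity();
            T combine(T a, T b);   // must be associative
        }

        // "Maximum under a total order with a bottom element"
        class MaxMonoid implements Monoid<Integer> {
            public Integer identity() { return Integer.MIN_VALUE; }
            public Integer combine(Integer a, Integer b) { return Math.max(a, b); }
        }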

    Read the article

  • Converter problem with XmlDataProvider

    - by Andrew
    Sorry for this, I've just started programming with wpf. I can't seem to figure out why the following xaml displays "System.Xml.XmlElement" instead of the actual xml node content. This is displayed 5 times in the listbox whenever I run it. Not sure where I'm going wrong... <Window x:Class="TestBinding.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Window.Resources> <XmlDataProvider x:Key="myXmlSource" XPath="/root"> <x:XData> <root xmlns=""> <name>Steve</name> <name>Arthur</name> <name>Sidney</name> <name>Billy</name> <name>Steven</name> </root> </x:XData> </XmlDataProvider> <DataTemplate x:Key="shmooga"> <TextBlock Text="{Binding}"/> </DataTemplate> </Window.Resources> <Grid> <ListBox ItemTemplate="{StaticResource shmooga}" ItemsSource="{Binding Source={StaticResource myXmlSource}, XPath=name}"> </ListBox> </Grid> </Window> Any help would be very much appreciated. Thanks!
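
    One common fix (a sketch; the rest of the markup stays as posted): inside the item template the DataContext is an XmlElement, and binding with no path just displays its ToString(). Binding to the element's InnerText shows the node's text instead:

        <DataTemplate x:Key="shmooga">
            <!-- The DataContext is an XmlElement; InnerText is its text content ("Steve", "Arthur", ...) -->
            <TextBlock Text="{Binding InnerText}" />
        </DataTemplate>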

    Read the article

  • Modern monitor technologies - need to find a new monitor

    - by Michal Minicki
    I'm preparing to replace my old LCD monitor with a new one. I have an old NEC 20WGX2 Pro based on an IPS panel. I'm looking for a screen that gives good color output but is also very good for gaming (since that is its primary use). I tend to switch monitors between my different computers at home, so it has to be multi-purpose, hence the IPS panel last time. Now, where can I read up on the newest monitor technologies so I can make an informed decision? I need to find the best fit for myself, and my knowledge is very outdated at the moment. Any hints are greatly appreciated, be it info on technologies, web sources, links to other questions, etc.

    Read the article

  • How to redirect (or Alias) jump page with Apache

    - by Meltemi
    I'm not an Apache expert but need to make a small change to a web server. We are introducing a "jump page" URL that is different from the primary URL (for tracking reasons): /productA/index.html /productA/jump_index.html Basically I want to log that jump_index.html was requested and then return index.html. I don't want the client to wait 8 seconds or so for a redirect. How should we be handling this? Simply symlink (or alias) the file in the filesystem? Use mod_alias AliasMatch (if so, how exactly)? Something better still? Edit: existing mod_rewrite block in httpd.conf: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_METHOD} ^TRACE RewriteRule .* - [F] </IfModule>
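
    One way to do it with the mod_rewrite module already loaded in httpd.conf (a sketch for server/vhost context): an internal rewrite serves index.html while the access log still records the jump URL, and the client never sees a redirect:

        RewriteEngine On
        # Internal rewrite, no external redirect; the request for jump_index.html is what gets logged.
        RewriteRule ^/productA/jump_index\.html$ /productA/index.html [PT,L]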

    Read the article

  • Fault tolerance with a pair of tightly coupled services

    - by cogitor
    I have two tightly coupled services that can run on completely different nodes (e.g. ServiceA and ServiceB). If I start up another replicated copy of both these services for backup purposes (ServiceA-2 and ServiceB-2), what would be the best way of setting up a fault tolerant distributed system such that on a fault in any of the tightly coupled services ServiceA or ServiceB the whole communication should go through backup ServiceA-2 and ServiceB-2? Overall, all the communication should go either through both services or their backup replicas. |---- Service A | | Service B | | (backup branch - used only on fault in Service A or B) ---- Service A-2 | Service B-2 Note that in case that Service A goes down, data from Service B would be incorrect (and vice versa). Load balancing between the primary and backup branch is also not feasible.

    Read the article

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26 I'm getting the wrong result with a select that has where, order by and limit clauses. It's only a problem when the order by uses the id column. I saw the MySQL manual for LIMIT Optimization My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem? Works correctly: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC ; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | | 1334 | 2010-05-06 08:05:25 | | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 3 rows in set (0.00 sec) WRONG result when limit added! Should be the first row, id - 1336 mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1; +------+---------------------+ | id | created_at | +------+---------------------+ | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 1 row in set (0.00 sec) Works correctly: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC ; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | | 1334 | 2010-05-06 08:05:25 | | 1331 | 2010-05-05 23:18:11 | +------+---------------------+ 3 rows in set (0.01 sec) Works correctly with limit: mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1; +------+---------------------+ | id | created_at | +------+---------------------+ | 1336 | 2010-05-14 08:05:25 | +------+---------------------+ 1 row in set (0.01 sec) Additional info: explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1; +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+ | 1 | SIMPLE | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4 | NULL | 3 | Using where | +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
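
    Two suggestions for narrowing this down (mine, not a confirmed fix): rule out a corrupted primary-key index, and see whether steering the optimizer away from index_billing_invoices_on_account_id changes the result:

        CHECK TABLE billing_invoices;

        SELECT id, created_at
        FROM billing_invoices FORCE INDEX (PRIMARY)
        WHERE account_id = 5
        ORDER BY id DESC
        LIMIT 1;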

    Read the article

  • How can I create an object in an abstract class without knowledge of the implementation?

    - by Greg
    Hi, Is there a way to implement the "CreateNode" method in my abstract library class below? Or can this only be done in client code outside the library? I currently get the error "Cannot create an instance of the abstract class or interface 'ToplogyLibrary.AbstractNode'" public abstract class AbstractTopology<T> { // Properties public Dictionary<T, AbstractNode<T>> Nodes { get; private set; } public List<AbstractRelationship<T>> Relationships { get; private set; } // Constructors protected AbstractTopology() { Nodes = new Dictionary<T, AbstractNode<T>>(); } // Methods public AbstractNode<T> CreateNode() { var node = new AbstractNode<T>(); // ** Does not work ** Nodes.Add(node.Key, node); } } public abstract class AbstractNode<T> { public T Key { get; set; } } public abstract class AbstractRelationship<T> { public AbstractNode<T> Parent { get; set; } public AbstractNode<T> Child { get; set; } }
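
    A sketch of the factory-method approach: the base class cannot new up an abstract type, but it can defer creation to subclasses and keep the bookkeeping (the NewNode name is mine, not from the question):

        using System.Collections.Generic;

        public abstract class AbstractTopology<T>
        {
            public Dictionary<T, AbstractNode<T>> Nodes { get; private set; }

            protected AbstractTopology()
            {
                Nodes = new Dictionary<T, AbstractNode<T>>();
            }

            // Each concrete topology decides which concrete node type to build;
            // it is expected to initialize Key before returning.
            protected abstract AbstractNode<T> NewNode();

            public AbstractNode<T> CreateNode()
            {
                var node = NewNode();
                Nodes.Add(node.Key, node);
                return node;
            }
        }

        public abstract class AbstractNode<T>
        {
            public T Key { get; set; }
        }

    An alternative is a second type parameter with a constructor constraint (AbstractTopology<T, TNode> where TNode : AbstractNode<T>, new()), which lets the base class call new TNode() directly.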

    Read the article

  • How to remove a model object from an EMF model and its GEF Editor via Adapter

    - by s.d
    This question is principally a follow-up to my question about EMF listening mechanisms. So, I have a third-party EMF model (uneditable) which is based on a generic graph model. The structure is as follows: Project | ItemGraph | Item | Document | DocumentGraph / | \ Tokens Nodes Relations(Edges) I have a GEF editor which works on the DocumentGraph (i.e., not the root object, perhaps this is a problem?): getGraphicalViewer().setContents(documentGraph). The editor has the following edit part structure: DocumentGraphEP / \ Primary Connection LayerEP LayerEP / \ | TokenEP NodeEP RelationEP PrimaryLayerEP and ConnectionLayerEP both have simple Strings as model, which are not represented in the EMF (domain) model. They are simply used to add a primary (i.e., node) layer, and a connection layer (with ShortestPathConnectionRouter) to the editor. Problem: I am trying to get myself into the workings of EMF adapters, and have tried to make use of the available tutorials, mainly the EMF-GEF Eclipse tutorial, vainolo's blog, and vogella's tutorial. I thought I'd start with an easy thing, so tried to remove a node from the graph and see if I get it to work. Which I didn't, and I don't see where the problem is. I can select a node, and have the generic delete Action in my toolbar, but when I click it, nothing happens. Here is the respective source code for the different responsible parts. Please be so kind as to point me to any errors (of thinking, coding errors, what have you) you can find. NodeEditPart public class NodeEditPart extends AbstractGraphicalEditPart implements Adapter { protected IFigure createFigure() { return new NodeFigure(); } protected void createEditPolicies() { .... installEditPolicy(EditPolicy.COMPONENT_ROLE, new NodeComponentEditPolicy()); } protected void refreshVisuals() { NodeFigure figure = (NodeFigure) getFigure(); SNode model = (SNode) getModel(); PrimaryLayerEditPart parent = (PrimaryLayerEditPart) getParent(); // Set text figure.getLabel().setText(model.getSName()); .... } public void activate() { if (isActive()) return; // start listening for changes in the model ((Notifier)getModel()).eAdapters().add(this); super.activate(); } public void deactivate() { if (!isActive()) return; // stop listening for changes in the model ((Notifier)getModel()).eAdapters().remove(this); super.deactivate(); } private Notifier getSDocumentGraph() { return ((SNode)getModel()).getSDocumentGraph(); } @Override public void notifyChanged(Notification notification) { int type = notification.getEventType(); switch( type ) { case Notification.ADD: case Notification.ADD_MANY: case Notification.REMOVE: case Notification.REMOVE_MANY: refreshChildren(); break; case Notification.SET: refreshVisuals(); break; } } @Override public Notifier getTarget() { return target; } @Override public void setTarget(Notifier newTarget) { this.target = newTarget; } @Override public boolean isAdapterForType(Object type) { return type.equals(getModel().getClass()); } } NodeComponentEditPolicy public class NodeComponentEditPolicy extends ComponentEditPolicy { public NodeComponentEditPolicy() { super(); } protected Command createDeleteCommand(GroupRequest deleteRequest) { DeleteNodeCommand cmd = new DeleteNodeCommand(); cmd.setSNode((SNode) getHost().getModel()); return cmd; } } DeleteNodeCommand public class DeleteNodeCommand extends Command { private SNode node; private SDocumentGraph graph; @Override public void execute() { node.setSDocumentGraph(null); } @Override public void undo() { node.setSDocumentGraph(graph); } public void setSNode(SNode node) { this.node = node; this.graph = node.getSDocumentGraph(); } } All seems to work fine: When a node is selected in the editor, the delete symbol is activated in the toolbar, but when it is clicked, nothing happens in the editor. I'd be very thankful for any pointers :).

    Read the article

  • Is there any open source solution for failover of incoming traffic?

    - by sahil
    Hi, We have two ISPs, and both ISPs' IPs are NATed to the same web server IP. I want failover for incoming traffic; is there any open source solution? Can I do it by running two name servers, one for each ISP? As far as I know, primary and secondary name servers reply in round-robin fashion while both are live, and only when one name server becomes unreachable does the other reply alone... so if I am right, I think I can do incoming failover by running two name servers in my office... Waiting for your valuable response... Thanking you, Sahil

    Read the article

  • WPF error template - red box still visible on collapse of an Expander

    - by Andy Clarke
    Hi, I'm doing some validation on the DataSource of TextBox that's within an Expander and have found that once a validation error has been triggered, if I collapse the Expander, the red box stays where the TextBox would have been. <Expander Header="Blah Blah Blah"> <TextBox Name="TextBox" Validation.ErrorTemplate="{DynamicResource TextBoxErrorTemplate}" Text="{Binding Path=Blah, UpdateSourceTrigger=PropertyChanged, ValidatesOnDataErrors=True}" /> </Expander> I've tried to get round this by binding the visibility of the Error Template to the Expander, however I think there's something wrong with the binding. <local:NotVisibleConverter x:Key="NotVisibleConverter" /> <ControlTemplate x:Key="TextBoxErrorTemplate"> <DockPanel> <Border BorderBrush="Red" BorderThickness="2" Visibility="{Binding Path=IsExpanded, Converter={StaticResource NotVisibleConverter}, RelativeSource={RelativeSource AncestorType=Expander}}" > <AdornedElementPlaceholder Name="MyAdorner" /> </Border> </DockPanel> <ControlTemplate.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> I guess I've gone wrong with my binding, can someone put me back on track please? Alternatively does anyone know another solution to the ErrorTemplate still being visible on the collapse of an Expander?
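
    One workaround that avoids the ancestor lookup entirely (a sketch; BooleanToVisibilityConverter is the stock WPF converter): the error adorner lives in the adorner layer, so it is often easier to drive its visibility from the adorned TextBox itself, via the placeholder's AdornedElement property. When the Expander collapses, the TextBox's IsVisible becomes false and the red border disappears with it:

        <BooleanToVisibilityConverter x:Key="BooleanToVisibilityConverter" />

        <ControlTemplate x:Key="TextBoxErrorTemplate">
            <DockPanel>
                <Border BorderBrush="Red" BorderThickness="2"
                        Visibility="{Binding ElementName=MyAdorner,
                                             Path=AdornedElement.IsVisible,
                                             Converter={StaticResource BooleanToVisibilityConverter}}">
                    <AdornedElementPlaceholder Name="MyAdorner" />
                </Border>
            </DockPanel>
        </ControlTemplate>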

    Read the article

  • CentOS - dual boot from new partition

    - by Dima
    I need to install two copies of CentOS 5.5 (bank A and bank B) on different partitions of the same hard disk and install the GRUB boot loader to another partition (visible from both banks). The boot loader should redirect the boot menu to bank A or bank B (according to the configuration). The new partition is mounted at /common_partition and GRUB is installed using the following command: grub-install /dev/hda In the new partition I created the following menu.lst file: title BOOTCONTROL REDIRECT : PLEASE WAIT root (hd0,1) configfile /boot/menu.lst boot On my setup both partitions (bank A and bank B) are primary and GRUB is installed in the MBR. The problem is that the new boot loader (on common_partition) does not load. What is wrong in my configuration?
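
    A couple of hedged suggestions (the partition numbers below are assumptions; adjust to the real layout): run grub-install with --root-directory pointing at the shared partition so the MBR stage actually reads the GRUB files there, keep the menu at /common_partition/boot/grub/menu.lst, and let that menu chain to whichever bank should boot:

        grub-install --root-directory=/common_partition /dev/hda

        # /common_partition/boot/grub/menu.lst (GRUB legacy), assuming bank A = (hd0,0), bank B = (hd0,1)
        default 0
        timeout 5

        title Bank A
            root (hd0,0)
            configfile /boot/grub/menu.lst

        title Bank B
            root (hd0,1)
            configfile /boot/grub/menu.lst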

    Read the article

  • DesignTime data not showing in Blend when bound against CollectionViewSource

    - by bitbonk
    I have datatemplate for a viewmodel where an itemscontrol is bound again a CollectionViewSource (to enable sorting in xaml). <DataTemplate x:Key="equipmentDataTemplate"> <Viewbox> <Viewbox.Resources> <CollectionViewSource x:Key="viewSource" Source="{Binding Modules}"> <CollectionViewSource.SortDescriptions> <scm:SortDescription PropertyName="ID" Direction="Ascending"/> </CollectionViewSource.SortDescriptions> </CollectionViewSource> </Viewbox.Resources> <ItemsControl ItemsSource="{Binding Source={StaticResource viewSource}}" Height="{DynamicResource equipmentHeight}" ItemTemplate="{StaticResource moduleDataTemplate}"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Horizontal" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> </ItemsControl> </Viewbox> </DataTemplate> I have also setup the UserControl where all of this is defined to provide designtime data d:DataContext="{x:Static vm:DesignTimeHelper.Equipment}"> This is basically a static property that gives me an EquipmentViewModel that has a list of ModuleViewModels (Equipment.Modules). Now as long as I bind to the CollectionViewSource the designtime data does not show up in blend 3. When I bind to the ViewModel collection directly <ItemsControl ItemsSource="{Binding Modules}" I can see the designtime data. Any idea what I could do?

    Read the article

  • How do you replace an entire XAML element?

    - by luke
    <ListView> <ListView.Resources> <DataTemplate x:Key="label"> <TextBlock Text="{Binding Label}"/> </DataTemplate> <DataTemplate x:Key="editor"> <UserControl Content="{Binding Control.content}"/> <!-- This is the line --> </DataTemplate> </ListView.Resources> <ListView.View> <GridView> <GridViewColumn Header="Name" CellTemplate="{StaticResource label}"/> <GridViewColumn Header="Value" CellTemplate="{StaticResource editor}"/> </GridView> </ListView.View> </ListView> On the marked line, I'm replacing the content of a UserControl with the content of another UserControl that is dynamically created in code. I'd like to replace the entire control, and not just the content. Is there a way to do this?
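
    One way to do this (a sketch, assuming the bound Control property is, or can be changed to return, the dynamically created UserControl): use a ContentControl in the cell template. When the Content is itself a UIElement, WPF displays that element directly, so the whole control is swapped rather than just its inner content:

        <DataTemplate x:Key="editor">
            <!-- Control is assumed to be the UserControl built in code -->
            <ContentControl Content="{Binding Control}" />
        </DataTemplate>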

    Read the article

  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert/update a Type 2 SCD dimension table. My source is a table variable to merge with the dimension table. My MERGE statement throws the following error: The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on either side of a (primary key, foreign key) relationship when the FROM clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'. My MERGE Statement: DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION] INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING, ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE, ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG, CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK FROM (MERGE DM.DATA_ERROR.ERROR_DIMENSION ED USING @DATAERROROBJECT OBJ ON(ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID) WHEN NOT MATCHED THEN INSERT VALUES( OBJ.ERROR_CODE ,OBJ.DATA_STREAM_ID ,OBJ.[ERROR_SEVERITY] ,OBJ.DATA_QUALITY_RATING ,OBJ.ERROR_LONG_DESCRIPTION ,OBJ.ERROR_DESCRIPTION ,OBJ.VALIDATION_RULE ,OBJ.ERROR_TYPE ,OBJ.ERROR_CLASS ,GETDATE() ,'9999-12-13' ,'Y' ,1 ,1 ) WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND ( ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY] OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING] OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION] OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION] OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE] OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE] OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS] ) THEN UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE() OUTPUT $ACTION ACTION_OUT, OBJ.ERROR_CODE ERROR_CODE, OBJ.DATA_STREAM_ID DATA_STREAM_ID, OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY], OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION, OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION, OBJ.VALIDATION_RULE VALIDATION_RULE, OBJ.ERROR_TYPE ERROR_TYPE, OBJ.ERROR_CLASS ERROR_CLASS, GETDATE() VALID_FROM, '9999-12-31' VALID_TO, 'Y' CURR_FLAG, 555 CREATED_BY_AUDIT_SK, 555 UPDATED_BY_AUDIT_SK ) AS MERGE_OUT WHERE MERGE_OUT.ACTION_OUT = 'UPDATE'; What am I doing wrong?
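
    The usual workaround for this restriction (a sketch; the column types are placeholders and the abbreviated column lists stand in for the full lists above): point the outer INSERT at a table variable or temp table, which has no foreign keys, then copy from there into the dimension with a second, ordinary INSERT:

        DECLARE @MERGE_OUT TABLE
        (
            ACTION_OUT nvarchar(10),
            ERROR_CODE int,
            DATA_STREAM_ID int
            -- ... remaining ERROR_DIMENSION columns ...
        );

        INSERT INTO @MERGE_OUT (ACTION_OUT, ERROR_CODE, DATA_STREAM_ID /* ... */)
        SELECT ACTION_OUT, ERROR_CODE, DATA_STREAM_ID /* ... */
        FROM ( MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
               USING @DATAERROROBJECT OBJ
                  ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
               -- ... WHEN NOT MATCHED / WHEN MATCHED clauses unchanged ...
               OUTPUT $ACTION ACTION_OUT, OBJ.ERROR_CODE ERROR_CODE, OBJ.DATA_STREAM_ID DATA_STREAM_ID /* ... */
             ) AS MERGE_OUT
        WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION (ERROR_CODE, DATA_STREAM_ID /* ... */)
        SELECT ERROR_CODE, DATA_STREAM_ID /* ... */
        FROM @MERGE_OUT;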

    Read the article

  • reaching 99.9999% uptime

    - by user35204
    I am currently developing a project which is mission-critical. The actual domain name is registered with 1&1, and I plan on purchasing the DynDNS Custom DNS service (which has 5 different geographical locations for DNS) and then another secondary DNS service, to make sure my DNS is as failover-safe as possible. Does it matter that the registration is with 1&1 - are they a weak link in the chain? All I really use them for is to point the domain at DynDNS as my primary nameserver and at my other provider as the secondary. I can transfer the registration to DynDNS - I'm just not sure if it really matters or not. Thanks

    Read the article

  • How do you align text so it is in the middle of a button in Silverlight?

    - by Roy
    Here is the current code I am using: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="ButtonPrototype.MainPage" Width="640" Height="480"> <UserControl.Resources> <ControlTemplate x:Key="CellTemplate" TargetType="Button"> <Grid> <Border x:Name="CellBorderBrush" BorderBrush="Black" BorderThickness="1"> <ContentPresenter Content="{TemplateBinding Content}" HorizontalAlignment="Center" VerticalAlignment="Center"/> </Border> </Grid> </ControlTemplate> <Style x:Key="CellStyle" TargetType="Button"> <Setter Property="Template" Value="{StaticResource CellTemplate}"></Setter> <Setter Property="Foreground" Value="Black"></Setter> <Setter Property="FontSize" Value="80"></Setter> <Setter Property="Width" Value="100"></Setter> <Setter Property="Height" Value="100"></Setter> </Style> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="White"> <Button Content="A" Style="{StaticResource CellStyle}"></Button> </Grid> </UserControl> The Horizontal aligning works but the verticalalignment doesn't do anything. Thanks for your help.

    Read the article

  • Browser: Cookie lost on refresh

    - by Nirmal
    I am experiencing a strange behaviour of my application in Chrome browser (No problem with other browsers). When I refresh a page, the cookie is being sent properly, but intermittently the browser doesn't seem to pass the cookie on some refreshes. This is how I set my cookie: $identifier = / some weird string /; $key = md5(uniqid(rand(), true)); $timeout = number_format(time(), 0, '.', '') + 43200; setcookie('fboxauth', $identifier . ":" . $key, $timeout, "/", "fbox.mysite.com", 0); This is what I am using for page headers: header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Thu, 25 Nov 1982 08:24:00 GMT"); // Date in the past Do you see any issue here that might affect the cookie handling? Thank you for any suggestion. EDIT-01: It seems that the cookie is not being sent with some requests. This happens intermittently and I am seeing this behaviour for ALL the browsers now. Has anyone come across such situation? Is there any situation where a cookie will not be sent with the request? Thanks again, for any guideline.
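
    Not a diagnosis, but a slightly more defensive version of the cookie code (the HttpOnly flag and the error_log call are my additions): compute the expiry as a plain integer, and fail loudly if output has already started, since anything echoed before setcookie() silently prevents the Set-Cookie header from being sent:

        $timeout = time() + 43200; // 12 hours from now
        if (headers_sent($file, $line)) {
            // Output already started, so the cookie header can no longer be added.
            error_log("fboxauth cookie not set: output started at $file:$line");
        } else {
            setcookie('fboxauth', $identifier . ':' . $key, $timeout, '/', 'fbox.mysite.com', false, true);
        }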

    Read the article

  • KeyUp processed for wrong control

    - by Mikael
    I have made a simple test application for the issue: two WinForms forms, each containing a button. The button on the first form opens the other form when clicked. It also subscribes to KeyUp events. The second form has its button set as the AcceptButton, and in its Click event we sleep for 1 s and then set DialogResult to OK (the sleep simulates some processing being done). When Enter is used to close this second form, the KeyUp event of the button on the first form is triggered, even though the key was released well before the second had passed, while the second form was still shown and focused. If any key other than Enter is pressed in the second form, the event is not triggered for the button on the first form. First form: public Form1() { InitializeComponent(); buttonForm2.KeyUp += new KeyEventHandler(cntKeyUp); } void cntKeyUp(object sender, KeyEventArgs e) { MessageBox.Show(e.KeyCode.ToString()); } private void buttonForm2_Click(object sender, EventArgs e) { using (Form2 f = new Form2()) { f.ShowDialog(); } } Second form: private void button1_Click(object sender, EventArgs e) { Thread.Sleep(1000); this.DialogResult = DialogResult.OK; } Does anyone know why the event is triggered for the button on the non-active form and what can be done to stop this from happening?
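
    The Sleep blocks Form2's message loop, so the key-up for Enter is still sitting in the input queue when the dialog closes and is delivered to whatever has focus by then - the button on Form1. A sketch of one way around it (assumes .NET 4.5+; DoProcessing() is a stand-in for the real work): keep the dialog's message pump running while the work happens, so Form2 consumes the KeyUp before it closes:

        // using System.Threading.Tasks;
        private async void button1_Click(object sender, EventArgs e)
        {
            button1.Enabled = false;                 // guard against re-entry while the work runs
            await Task.Run(() => DoProcessing());    // was: Thread.Sleep(1000)
            DialogResult = DialogResult.OK;
        }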

    Read the article

  • How do I specify Open ID Realm in spring security ?

    - by Salvin Francis
    We are using Spring security in our application with support for username / password based authentication as well as Open id based authentication. The issue is that google gives a different open id for the return url specified and we have at least 2 different entry points in our application from where open id is configured into our system. Hence we decided to use open id realm. http://blog.stackoverflow.com/2009/0...ue-per-domain/ http://groups.google.com/group/googl...unts-api?pli=1 how is it possible to integrate realm into our spring configuration/code ? This is how we are doing it in traditional openid library code: AuthRequest authReq = consumerManager.authenticate(discovered, someReturnToUrl,"http://www.example.com"); This works and gives same open id for different urls from our site. our configuration: Code: ... <http auto-config="false"> <!-- <intercept-url> tags are here --> <remember-me user-service-ref="someRememberedService" key="some key" /> <form-login login-page="/Login.html" authentication-failure-url="/Login.html?error=true" always-use-default-target="false" default-target-url="/MainPage.html"/> <openid-login authentication-failure-url="/Login.html?error=true" always-use-default-target="true" default-target-url="/MainPage.html" user-service-ref="someOpenIdUserService"/> </http> ... <beans:bean id="someOpenIdUserService" class="com.something.MyOpenIDUserDetailsService"> </beans:bean> <beans:bean id="openIdAuthenticationProvider" class="com.something.MyOpenIDAuthenticationProvider"> <custom-authentication-provider /> <beans:property name="userDetailsService" ref="someOpenIdUserService"/> </beans:bean> ...

    Read the article

  • Migrating SBS 2003 to 2012 standard

    - by AryaW
    My company is currently trying to migrate a Windows Small Business Server 2003 installation to Windows Server 2012. We know the general procedure, but we want to make sure we aren't going to mess anything up tremendously. Here are the steps we were planning on taking: 1. Uninstall Exchange 2. Remove legacy GPOs 3. Demote the domain controller 4. Promote the new server to be the primary domain controller. We have no mail servers to worry about. My question is, will the above method work, or will we need to create a completely new domain? Thanks!
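
    One ordering note plus a sketch (domain name is a placeholder): the 2012 server generally needs to be promoted as an additional domain controller, and the FSMO roles transferred to it, before the SBS box is demoted - otherwise you would be demoting the only DC in the domain. On the 2012 server, after joining the existing domain:

        # The 2012 AD DS deployment runs adprep (forestprep/domainprep) automatically if the schema needs it.
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
        Install-ADDSDomainController -DomainName "corp.example.local" -InstallDns -Credential (Get-Credential)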

    Read the article

  • Dedicated Server emails ending up in Junk

    - by Pasta
    I have a dedicated server that works fine. Recently I added a new domain with a new dedicated IP address. The emails from the web server get sent out from the primary IP address, which is different from the IP address of the domain. This causes the emails to end up in Junk email folders. Is there anything I can do, such as changing the SMTP server to use the new IP address or configuring sendmail? I need this for my PHP server on CentOS.
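
    Whichever IP ends up sending, receivers mostly care that the sending IP is authorized for the domain and has matching reverse DNS. A sketch of an SPF record for the new domain (example.com stands in for the domain and 203.0.113.10 for whichever IP actually sends; sendmail can also be bound to a specific outgoing address via its ClientPortOptions setting):

        ; zone file for the new domain
        example.com.   IN  TXT  "v=spf1 ip4:203.0.113.10 -all"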

    Read the article
