Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.

Page 199/1205 | < Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >

  • One email with multiple OpenIDs: unable to retrieve a specific OpenID's password?

    - by superUser
    I have multiple OpenID accounts referencing the same email address, and I have forgotten the password for one of them. When I try to recover it, the reset link sent to my email address is for a different OpenID account than the one I need. What do I have to do? I am still able to log in through Gmail, but I want to log in through OpenID. I have already emailed support, but got no satisfactory answer. How do I get password reset links for all the OpenID accounts referencing the same email address?

    Read the article

  • Simplest way to respawn a configured number of instances of a specific process.

    - by Zwei Steinen
    So we have an app that we want to run multiple instances of on Linux. The number of instances should be configurable. We also want a new instance to be booted up whenever one of the running instances disappears. I was looking into C-based programs, shell scripts, Python scripts, etc., but I was wondering what the simplest, easiest way to do it would be. Are there any tools out there? Can one simply use some built-in Linux functionality? The Linux distribution is Red Hat.
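    On Red Hat-era SysV init, the classic built-in route is an /etc/inittab entry with the "respawn" action (daemontools' supervise is another common tool). For illustration, here is a minimal supervise-and-respawn loop, sketched in C# for concreteness; the instance count and app path are hypothetical command-line arguments:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class Supervisor
        {
            static void Main(string[] args)
            {
                // Hypothetical usage: supervisor <count> <path-to-app>
                int count = int.Parse(args[0]);
                string app = args[1];

                // Boot the configured number of instances.
                Process[] procs = Enumerable.Range(0, count)
                                            .Select(_ => Process.Start(app))
                                            .ToArray();

                // Poll for dead instances and respawn them.
                while (true)
                {
                    for (int i = 0; i < procs.Length; i++)
                    {
                        if (procs[i].HasExited)
                            procs[i] = Process.Start(app);
                    }
                    Thread.Sleep(1000);
                }
            }
        }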

    Read the article

  • What's the best way to forward traffic on a specific port to another machine?

    - by Ankit
    The setup I have is this: [client01] <-A-> [server01] <-B-> [server02] client01 can access port 9300 on server01 (connection A). server01 can access port 9300 on server02 (connection B). What's the best way to make all traffic on port 9300 to server01 go to port 9300 on server02? I can successfully do this with an ssh tunnel from client01 to server01 to server02, but I don't want to have to run ssh on client01. When I ssh from server01 to server02 forwarding port 9300 (ssh -g -L9300:localhost:9300 server02 on server01), it doesn't work -- am I using the wrong command?
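    One common non-ssh alternative is kernel-level forwarding on server01; a sketch with iptables (assumes root access, that "server02" stands in for its actual IP address, and that details may vary by distribution):

        # Enable forwarding, rewrite inbound 9300 to server02, and fix the return path
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp --dport 9300 -j DNAT --to-destination server02:9300
        iptables -t nat -A POSTROUTING -p tcp -d server02 --dport 9300 -j MASQUERADE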

    Read the article

  • Can htpasswd be used to restrict access to a URL rather than a specific folder?

    - by me_here
    I would like to restrict access to certain URLs with htpasswd files, rather than to folders. Is this possible? For example, I wish to restrict the URL: www.example.com/pages/id/227/Restricted_Page but allow access to other URLs such as: www.example.com/pages/id/100/Normal_Page. The "pages" part of the URL refers to a pages.php file, and the "id" part is the function name in that file. The reason for wanting to do this is to migrate existing restriction lists, in the form of htpasswd files, from another site. Many thanks.
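    Assuming Apache, auth directives can be keyed to the request URL (not just a directory on disk) via LocationMatch; a minimal sketch, with a hypothetical htpasswd path:

        <LocationMatch "^/pages/id/227/">
            AuthType Basic
            AuthName "Restricted Page"
            AuthUserFile /etc/httpd/conf/restricted.htpasswd
            Require valid-user
        </LocationMatch>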

    Read the article

  • How to redirect a specific URL through a proxy for multiple services?

    - by CrystalFire
    I have a website hosted for free on 000webhost.com. I am unable to connect directly to the site because Comcast has blocked a portion of 000webhost's servers for free accounts, due to other people hosting malicious content. As a result, I cannot use my computer to connect directly to the server to maintain my website. I am wondering if there is a way to transparently forward attempts to access that specific server through a proxy. My current system is Windows, but I also have systems running Mac OS X and Linux, so solutions for any of them would be fine. I've found answers that work for HTTP, but I'm looking for a solution that covers the other services as well, such as FTP and SSH.

    Read the article

  • Redirect e-mail sent from a specific address to a user, to another user.

    - by Michael Pasqualone
    I need to redirect e-mail within our MTA when the two following criteria are both true: the message is sent from [email protected] and addressed to [email protected]. Result: redirect the e-mail to [email protected]. I don't want to catch *@isp.com and redirect, and I don't want to redirect all e-mail addressed to [email protected]; I only want the redirect when [email protected] sends [email protected] an e-mail. How do I achieve this within Postfix's configuration? And if it's not possible within Postfix, what would be the best solution?
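    Postfix's standard lookup tables key on sender or recipient, not on both at once, so one common workaround is delivery-time filtering for just that one recipient, e.g. procmail in the recipient's ~/.procmailrc. A sketch with clearly hypothetical addresses:

        :0
        * ^From:.*sender@isp\.example
        ! otheruser@mydomain.example

        # Anything not matched above falls through to normal delivery.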

    Read the article

  • Change "From" email address when sending to a specific domain.

    - by RB
    Hi, I would like to set up Exchange so that when a user (e.g. [email protected]) sends an email to somedomain.com, Exchange makes the email appear to have come from [email protected]. This would have to be done on a per-domain basis, and invisibly to the user. So all emails come from [email protected], except emails to somedomain.com, which appear to come from [email protected]. If anyone has any ideas I would really appreciate it. Cheers, RB.

    Read the article

  • Is there a good way to prevent a server from emailing a specific address (we control both servers/apps)?

    - by Bms85smb
    When testing a production app we occasionally need to pull data from the live site and run tests on a development server. There are quite a few email addresses stored in the database that we need to modify every time we restore to the development server. Occasionally someone on my team will miss one and accidentally send an email through the distribution list. The email looks legitimate because it comes from a clone, so it can cause quite a situation. We have a protocol we follow every time we clone the live app, and it has helped a lot, but I would feel better if it were impossible for the two servers to communicate. Is there a good way to do this? Can firewall rules block email? Does Plesk have a blacklist?
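    Firewall rules can do this: blocking outbound SMTP on the development box prevents any clone from delivering mail at all. A minimal iptables sketch (the relay address is hypothetical):

        # Refuse all outbound SMTP from the development server
        iptables -A OUTPUT -p tcp --dport 25 -j REJECT
        # Or block only mail bound for the production relay:
        # iptables -A OUTPUT -p tcp -d mail.example.com --dport 25 -j REJECT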

    Read the article

  • Linq 2 SQL using base class and WCF

    - by Gena Verdel
    Hi all. I have the following problem: I'm using L2S to generate entity classes. All these classes share the same property, ID, which is an autonumber. So I figured I would put this property into a base class and derive all entity classes from it. To be able to read the value I use the override modifier on this property in each and every entity class. Up to now it has been live and kicking. Then I decided to introduce another tier: services, using the WCF approach. I've modified the serialization mode to Unidirectional (and added the IsReference=true attribute to enable both directions), and also added the [DataContract] attribute to the BaseObject class. WCF is able to transport the whole object except one property: ID. Applying the [DataMember] attribute to the ID property in the base class resulted in nothing. Am I missing something? Is what I'm trying to achieve possible at all?

        [DataContract()]
        abstract public class BaseObject : IIccObject
        {
            public virtual long ID { get; set; }
        }

        [Table(Name="dbo.Blocks")]
        [DataContract(IsReference=true)]
        public partial class Block : INotifyPropertyChanging, INotifyPropertyChanged
        {
            private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty);

            private long _ID;
            private int _StatusID;
            private string _Name;
            private bool _IsWithControlPoints;
            private long _DivisionID;
            private string _SHAPE;
            private EntitySet<BlockByWorkstation> _BlockByWorkstations;
            private EntitySet<PlanningPointAppropriation> _PlanningPointAppropriations;
            private EntitySet<Neighbor> _Neighbors;
            private EntitySet<Neighbor> _Neighbors1;
            private EntitySet<Task> _Tasks;
            private EntitySet<PlanningPointByBlock> _PlanningPointByBlocks;
            private EntityRef<Division> _Division;
            private bool serializing;

            // ... partial OnXxxChanging/OnXxxChanged extensibility methods
            // (OnLoaded, OnValidate, OnCreated, and one pair per column) ...

            public Block()
            {
                this.Initialize();
            }

            [Column(Storage="_ID", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)]
            [DataMember(Order=1)]
            public override long ID
            {
                get { return this._ID; }
                set
                {
                    if ((this._ID != value))
                    {
                        this.OnIDChanging(value);
                        this.SendPropertyChanging();
                        this._ID = value;
                        this.SendPropertyChanged("ID");
                        this.OnIDChanged();
                    }
                }
            }

            // The remaining column properties -- StatusID [DataMember(Order=2)],
            // Name [Order=3], IsWithControlPoints [Order=4], DivisionID [Order=5]
            // and SHAPE [Order=6] -- follow exactly the same generated pattern.

            [Association(Name="Block_BlockByWorkstation", Storage="_BlockByWorkstations", ThisKey="ID", OtherKey="BlockID")]
            [DataMember(Order=7, EmitDefaultValue=false)]
            public EntitySet<BlockByWorkstation> BlockByWorkstations
            {
                get
                {
                    if ((this.serializing && (this._BlockByWorkstations.HasLoadedOrAssignedValues == false)))
                    {
                        return null;
                    }
                    return this._BlockByWorkstations;
                }
                set { this._BlockByWorkstations.Assign(value); }
            }

            // The other association properties -- PlanningPointAppropriations
            // [Order=8], Neighbors [Order=9], Neighbors1 [Order=10], Tasks
            // [Order=11], PlanningPointByBlocks [Order=12], and the Division
            // EntityRef (which has no [DataMember]) -- follow the same pattern,
            // as do the INotifyPropertyChanging/INotifyPropertyChanged event
            // plumbing, the attach_/detach_ handlers, and the Initialize()
            // method that wires up the EntitySets.

            [OnDeserializing()]
            [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)]
            public void OnDeserializing(StreamingContext context)
            {
                this.Initialize();
            }

            [OnSerializing()]
            [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)]
            public void OnSerializing(StreamingContext context)
            {
                this.serializing = true;
            }

            [OnSerialized()]
            [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)]
            public void OnSerialized(StreamingContext context)
            {
                this.serializing = false;
            }
        }

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:

      - Visual Studio 2013 Preview Support: the Windows Azure SDK now supports using the new VS 2013 Preview
      - Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
      - Visual Studio Server Explorer Enhancements: redesigned with improved filtering and auto-loading of subscription resources
      - Virtual Machines: start and stop VMs, with suspended billing, directly from within Visual Studio
      - Cloud Services: new Emulator Express option with reduced footprint and Run as Normal User support
      - Service Bus: new high availability options, Notification Hub support, improved VS tooling
      - PowerShell Automation: lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

    All of these SDK enhancements are available to start using immediately, and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio's Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today's SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

    Visual Studio 2013 Preview Support

    Today's Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the "install the SDK" link from the Windows Azure .NET Developer Center, then choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don't already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

    Visual Studio 2013 VM Image Hosted in the Cloud

    One of the requests we've heard from several customers has been for the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today's SDK update we've added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the shutdown-and-suspend-billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while it is not in use. You can create your own VS image in the cloud by using the New -> Compute -> Virtual Machine -> From Gallery menu within the Windows Azure Management Portal, and then selecting the "Visual Studio Ultimate 2013 Preview" template.

    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

    With the Windows Azure SDK 2.1 release you'll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node. From the top-level node you can now manage your Windows Azure credentials, import a subscription file, or filter Server Explorer to only show services from particular subscriptions or regions. (Note: the Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview.) Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer; you no longer need to add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.

    Subscription and Region Filtering Support

    Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few of them for some period of time, this is a handy way to hide the services from the other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select "Filter Services" from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by; for example, showing only services from a pay-as-you-go subscription within the East US region. Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately. With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You'll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

    Other Improvements

    We've also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance, as well as the properties on storage accounts, virtual machines and web sites; just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer: simply right-click on the appropriate storage account node and you can create them directly within Visual Studio.

    Virtual Machines: Start/Stop within Visual Studio

    Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in dev/test scenarios: you can start a VM (such as a SQL Server) during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now remote desktop directly into VMs using the "Connect using Remote Desktop" context menu command in VS Server Explorer.

    Cloud Services: Emulator Express with Run as Normal User Support

    You can now launch Visual Studio and run your cloud services locally as a normal user (without having to elevate to an administrator account) using a new Emulator Express option, included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode (one instance per role), doesn't require administrative permissions, and uses 40% fewer resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, change the following settings in the Windows Azure project: on the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab. Check the setting for IIS (Internet Information Services) and make sure that the option is set to IIS Express, not the full version of IIS (Emulator Express is not compatible with full IIS). Then, on the Web tab, choose the option for Emulator Express.

    Service Bus: Notification Hubs

    With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN, or watch the Notification Hubs //BUILD/ presentation.

    Service Bus: Paired Namespaces

    One of the new features included with today's Windows Azure SDK 2.1 release is support for Service Bus "Paired Namespaces". Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic, which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which let you connect two namespaces together. When you activate the secondary namespace, messages are stored in a secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic "Asynchronous Messaging Patterns and High Availability".

    Service Bus: Tooling Improvements

    In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio's Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template: this code now uses an event-driven "message pump" programming model based on the QueueClient.OnMessage method.
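    As a rough illustration of that message-pump model (a sketch, not the template's actual generated code; the connection string and queue name below are hypothetical):

        using System;
        using Microsoft.ServiceBus.Messaging;

        class Listener
        {
            static void Main()
            {
                // Hypothetical connection string and queue name.
                string connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;...";
                QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

                // The client's internal message pump invokes this callback as messages
                // arrive, replacing an explicit Receive() polling loop. With the default
                // OnMessageOptions, a message is auto-completed when the callback
                // returns without throwing.
                client.OnMessage(msg => Console.WriteLine("Received " + msg.MessageId));

                Console.ReadLine();  // keep the process alive while the pump runs
            }
        }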
    PowerShell: Tons of New Automation Commands

    Since my last blog post on the previous Windows Azure SDK 2.0 release, we've updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We've added new cmdlets in the following areas: China instance and Windows Azure Pack support, environment configuration, VMs, Cloud Services, Web Sites, Storage, SQL Azure, and Service Bus.

    China Instance and Windows Azure Pack

    We now support the following cmdlets for the China instance and Windows Azure Pack, respectively. China instance: Web Sites, Service Bus, Storage, Cloud Services, VMs, Network. Windows Azure Pack: Web Sites, Service Bus. We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

    Virtual Machines: Stop/Start Virtual Machines

    Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing. If you want to keep the original behavior, where a stopped VM stays provisioned, you can pass in the -StayProvisioned switch parameter.

    Virtual Machines: VM Endpoint ACLs

    We've added and updated a bunch of cmdlets that let you configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint:

      - New-AzureAclConfig
      - Get-AzureAclConfig
      - Set-AzureAclConfig
      - Remove-AzureAclConfig
      - Add-AzureEndpoint -ACL
      - Set-AzureEndpoint -ACL

    The following example shows how to add an ACL rule to an existing endpoint of a VM. Other improvements for Virtual Machine management include: an added -NoWinRMEndpoint parameter for New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management; an added -DirectServerReturn parameter for Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return; and a new Set-AzureLoadBalancedEndpoint cmdlet to modify load-balanced endpoints.

    Cloud Services: Remote Desktop and Diagnostics

    Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We've introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell.

    Windows Azure Cloud Services Remote Desktop extension:

      - New-AzureServiceRemoteDesktopExtensionConfig
      - Get-AzureServiceRemoteDesktopExtension
      - Set-AzureServiceRemoteDesktopExtension
      - Remove-AzureServiceRemoteDesktopExtension

    Windows Azure Cloud Services Diagnostics extension:

      - New-AzureServiceDiagnosticsExtensionConfig
      - Get-AzureServiceDiagnosticsExtension
      - Set-AzureServiceDiagnosticsExtension
      - Remove-AzureServiceDiagnosticsExtension

    The following example shows how to enable Remote Desktop for a Cloud Service.

    Web Sites: Diagnostics

    With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet for streaming the logs of your Web Sites. Recently, we've also added cmdlets to configure Web Site application diagnostics: Enable-AzureWebsiteApplicationDiagnostic and Disable-AzureWebsiteApplicationDiagnostic. The following two examples show how to enable application diagnostics to the file system and to a Windows Azure Storage Table.

    SQL Database

    Previously, you had to know the SQL Database server admin username and password if you wanted to manage a database on that SQL Database server. Recently, we've made the experience much easier by not requiring the admin credentials if the database server is in your subscription: you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets:
      - Get-AzureSqlDatabase
      - New-AzureSqlDatabase
      - Remove-AzureSqlDatabase
      - Set-AzureSqlDatabase

    We've also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we've also added cmdlets to get the database server quota and to set the database service objective. Check out the following cmdlets for details: Get-AzureSqlDatabaseServerQuota, Get-AzureSqlDatabaseServiceObjective, and Set-AzureSqlDatabase -ServiceObjective.

    Storage and Service Bus

    Other new cmdlets include, for Storage, CRUD cmdlets for Azure Tables and Queues, and for Service Bus, cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub.

    Summary

    Today's release includes a bunch of great features that enable you to build even better cloud solutions. All of the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Ext JS: how to tell PagingToolbar to use the parent Grid's store?

    - by Nazariy
    I'm trying to build an application that uses a single config passed by the server as non-native JSON (it can contain functions). Everything works fine so far, but I'm curious why PagingToolbar does not have an option to use the parent Grid's store. I have tried to set the store in my config like this, but without success: { ... store: Ext.StoreMgr.lookup('unique_store_id') } Is there any way to do this without writing tons of JavaScript for each view, defining the store, grid and other items in my application, or at least to extend PagingToolbar so that it takes its options from the parent object? UPDATED, here is a short example of the server response (minified):

        {
            "xtype": "viewport",
            "layout": "border",
            "renderTo": Ext.getBody(),
            "autoShow": true,
            "id": "mainFrame",
            "defaults": {"split": true, "useSplitTips": true},
            "items": [
                {
                    "region": "center",
                    "xtype": "panel",
                    "layout": "fit",
                    "id": "content-area",
                    "items": {
                        "id": "manager-panel",
                        "region": "center",
                        "xtype": "tabpanel",
                        "activeItem": 0,
                        "items": [
                            {
                                "xtype": "grid",
                                "id": "domain-grid",
                                "title": "Manage Domains",
                                "store": {
                                    "xtype": "arraystore",
                                    "id": "domain-store",
                                    "fields": [...],
                                    "autoLoad": {"params": {"controller": "domain", "view": "store"}},
                                    "url": "index.php"
                                },
                                "tbar": [...],
                                "bbar": {
                                    "xtype": "paging",
                                    "id": "domain-paging-toolbar",
                                    "store": Ext.StoreMgr.lookup('domain-store')
                                },
                                "columns": [...],
                                "selModel": new Ext.grid.RowSelectionModel({singleSelect: true}),
                                "stripeRows": true,
                                "height": 350,
                                "loadMask": true,
                                "listeners": {
                                    "cellclick": activateDisabledButtons
                                }
                            }
                        ]
                    }
                }
            ]
        }

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM server that I am rewriting in C# .NET 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects. This has been done and is working as far as the interfaces are concerned: all of the calls are correctly being made from the old clients to the new .NET objects. However, I'm having problems obtaining the identity of the calling user from the DCOM client. In order to identify the user who instigated the DCOM call, I have the following code on the server...

        [DllImport("ole32.dll")]
        static extern int CoImpersonateClient();

        [DllImport("ole32.dll")]
        static extern int CoRevertToSelf();

        private string CallingUser
        {
            get
            {
                string sCallingUser = null;
                if (CoImpersonateClient() == 0)
                {
                    WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
                    if (wp != null)
                    {
                        WindowsIdentity wi = wp.Identity as WindowsIdentity;
                        if (wi != null && !string.IsNullOrEmpty(wi.Name))
                            sCallingUser = wi.Name;
                    }
                    if (CoRevertToSelf() != 0)
                        ReportWin32Error("CoRevertToSelf");
                }
                else
                    ReportWin32Error("CoImpersonateClient");
                return sCallingUser;
            }
        }

        private static void ReportWin32Error(string sFailingCall)
        {
            Win32Exception ex = new Win32Exception();
            Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
        }

    When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after 3 or 4 different users have successfully made calls (it varies, so I can't be more specific), further users are identified as users who made earlier calls. What I have noticed is that the first few users have their DCOM calls handled on their own threads (i.e. all calls from a particular client are handled by a single unique thread); subsequent users are then handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread. To illustrate:

        User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
        User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
        User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
        User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)

    As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server identifies the calling client as Harry. Is there something I am doing wrong? Are there any caveats or restrictions on using impersonation in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do? All help would be greatly appreciated. Regards, Andrew

    Read the article

  • MySQL binary storage using BLOB vs. OS file system: large files, large quantities, large problems.

    - by Quantico773
    Hi guys. Versions I am running (basically the latest of everything): PHP 5.3.1, MySQL 5.1.41, Apache 2.2.14, OS: CentOS (latest).

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search for and locate the correct document(s) effectively. For this reason ALL of these files have to be digitised (if they aren't already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy, just like Windows Explorer or any other typical file management model. The structure appears as such:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

        TABLE: FileUploads
        - FileID
        - CustomerID (the customer ID that the file belongs to; they all have this)
        - JobID/SaleID (the ID of the associated job/sale, if any)
        - FileSize
        - FileType
        - UploadedDateTime
        - UploadedBy
        - FilePath (the directory path the file is stored in)
        - FileName (current name of the uploaded file; a combination of CustomerID and JobID/SaleID if applicable)
        - FileDescription
        - OriginalFileName (original name of the source file when uploaded, including extension)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require; with the FilePath and FileName I can provide the link for download: http... server / FilePath / FileName.

    There are a number of problems with this method. Storing files in this "database-unconscious" environment means data integrity is not kept: if a record is deleted, the file may not be deleted also, or vice versa. Files are strewn all over the place: different servers, computers, etc. The file name is the ONLY thing matching the binary to the database, the customer profile and the customer records. And so on. There are many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely.

    The article at dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it alleviates pressure on the server's memory: instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it has been totally successful, in a very small frame of testing.

    So if you agree that this method is a viable, stable and robust long-term option for storing moderately large files (1kb to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "file management" PHP script that provides an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^, so please discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
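    As a rough cross-language illustration of that chunked-streaming idea (a sketch only; the original article does it in PHP, and the FileChunks table and column names here are hypothetical):

        using System;
        using System.Data;
        using System.IO;
        using MySql.Data.MySqlClient;

        static class BlobStreamer
        {
            // Stream a stored file to `output` in 64 KB chunks instead of
            // materialising the whole BLOB in memory.
            public static void StreamFile(string connStr, long fileId, Stream output)
            {
                const int ChunkSize = 64 * 1024;
                using (var conn = new MySqlConnection(connStr))
                using (var cmd = new MySqlCommand(
                    "SELECT Data FROM FileChunks WHERE FileID = @id ORDER BY ChunkNo", conn))
                {
                    cmd.Parameters.AddWithValue("@id", fileId);
                    conn.Open();
                    // SequentialAccess lets the driver hand back column bytes lazily.
                    using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        var buffer = new byte[ChunkSize];
                        while (reader.Read())
                        {
                            long offset = 0;
                            long read;
                            while ((read = reader.GetBytes(0, offset, buffer, 0, ChunkSize)) > 0)
                            {
                                output.Write(buffer, 0, (int)read);
                                offset += read;
                            }
                        }
                    }
                }
            }
        }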

    Read the article

  • MongoDB vs. Redis vs. Cassandra for a fast-write, temporary row storage solution

    - by Mark Bao
    Hi there, I'm building a system that tracks and verifies ad impressions and clicks. This means that there are a lot of insert commands (about 90/second average, peaking at 250) and some read operations, but the focus is on performance and making it blazing-fast. The system is currently on MongoDB, but I've been introduced to Cassandra and Redis since then. Would it be a good idea to go to one of these two solutions, rather than stay on MongoDB? Why or why not? Thank you

    Read the article

  • Problems related to showing MessageBox from non-GUI threads

    - by Hans Løken
    I'm working on a heavily data-bound WinForms application where I've found some strange behavior. The app has separate I/O threads receiving updates through asynchronous web requests, which it then sends to the main/GUI thread for processing and for updating application-wide data stores (which in turn may be data-bound to various GUI elements, etc.). The server at the other end of the web requests requires periodic requests or the session times out. I've gone through several attempted solutions to the threading issues, and I've observed the following behavior:

    1. If I use Control.Invoke to send updates from the I/O thread(s) to the main thread, and an update causes a MessageBox to be shown, the main form's message pump stops until the user clicks the OK button. This also blocks the I/O thread from continuing, eventually leading to timeouts on the server.

    2. If I use Control.BeginInvoke to send updates from the I/O thread(s) to the main thread, the main form's message pump does not stop, but if the processing of an update leads to a message box being shown, the processing of the rest of that update is halted until the user clicks OK. Since the I/O threads keep running and the message pump keeps processing messages, several BeginInvoke'd updates may be processed before the one with the message box is finished. This leads to out-of-sequence updates, which is unacceptable.

    3. The I/O threads add updates to a blocking queue (very similar to http://stackoverflow.com/questions/530211/creating-a-blocking-queuet-in-net/530228#530228), and the GUI thread uses a Forms.Timer that periodically applies all updates in the blocking queue. This solves both the blocking of the I/O threads and the sequentiality of updates, i.e. the next update is never started until the previous one is finished. However, there is a small performance cost, as well as a latency in showing updates, that is unacceptable in the long run. I would like update processing in the main thread to be event-driven rather than polling.

    So to my question. How should I do this so as to: avoid blocking the I/O threads; guarantee that updates are finished in sequence; and keep the main message pump running while showing a message box as a result of an update?

    Read the article

  • How do I obtain a new stateful session bean in a servlet thread?

    - by FarmBoy
    I'm experimenting with EJB3. I would like to inject a stateful session bean into a servlet, so that each user that hits the servlet obtains a new bean. Obviously, I can't make the bean an instance variable of the servlet, as that would be shared. And apparently injecting into local variables isn't allowed. I can use the new operator to create a bean, but that doesn't seem like the right approach. Is there a right way to do this? It seems like what I'm trying to do is fairly straightforward; after all, we would want each new customer to find an empty shopping cart.

    Read the article

  • Best practice for Stopwatch on a multi-processor machine?

    - by Ahmed Said
    I found a good question about measuring function performance, and the answers recommend using Stopwatch, as follows:

        Stopwatch sw = new Stopwatch();
        sw.Start();
        // DoWork
        sw.Stop();
        // take sw.Elapsed

    But is this valid if you are running on a multi-processor machine? The thread can be switched to another processor, can't it? The same concern applies to Environment.TickCount. If the answer is yes, should I wrap my code inside BeginThreadAffinity, as follows?

        Thread.BeginThreadAffinity();
        Stopwatch sw = new Stopwatch();
        sw.Start();
        // DoWork
        sw.Stop();
        // take sw.Elapsed
        Thread.EndThreadAffinity();

    P.S. The switching can occur at the thread level, not only the processor level; for example, if the function is running in another thread, the system can switch it to another processor. If that happens, will the Stopwatch still be valid after the switch? I am not using Stopwatch only for performance measurement but also to simulate a timer function using Thread.Sleep (to prevent call overlapping).

    Read the article

  • Android emulator crashes on Mac OS X

    - by Dave
    I am setting up an Android environment for the first time on Eclipse. I have many years of Eclipse experience, but am new to Android. This is being done on an Apple Mac Mini, running Mac OS X 10.6.3. I am using the latest Eclipse Classic, version 3.5.2. I am trying to get the tiny hello world program running. When I run it, I get the following in the console window of Eclipse:

        [2010-06-12 13:48:08 - HelloAndroid] Automatic Target Mode: launching new emulator with compatible AVD 'Android2.2AVD'
        [2010-06-12 13:48:08 - HelloAndroid] Launching a new emulator with Virtual Device 'Android2.2AVD'
        [2010-06-12 13:48:11 - HelloAndroid] New emulator found: emulator-5554
        [2010-06-12 13:48:11 - HelloAndroid] Waiting for HOME ('android.process.acore') to be launched...
        [2010-06-12 13:48:12 - Emulator] 2010-06-12 13:48:12.783 emulator[50495:903] Warning once: This application, or a library it uses, is using NSQuickDrawView, which has been deprecated. Apps should cease use of QuickDraw and move to Quartz.
        [2010-06-12 13:48:19 - HelloAndroid] emulator-5554 disconnected! Cancelling 'com.example.helloandroid.HelloAndroid activity launch'!

    The emulator crashes with the following info. I have followed all the instructions for running the hello world sample. Anyone have any ideas?

        Process:         emulator [50398]
        Path:            /Users/jeremy/android-sdk-mac_86/tools/emulator
        Identifier:      emulator
        Version:         ??? (???)
        Code Type:       X86 (Native)
        Parent Process:  eclipse [50388]
        Date/Time:       2010-06-12 13:28:38.595 -0400
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6
        Interval Since Last Report:          363037 sec
        Crashes Since Last Report:           9
        Per-App Crashes Since Last Report:   7

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000000007fd000
        Crashed Thread:  4

        Thread 0: Dispatch queue: com.apple.main-thread
        0 emulator 0x000eed4e helper_set_cp15 + 30

        Thread 1:
        0 libSystem.B.dylib 0x9020bbd2 __workq_kernreturn + 10
        1 libSystem.B.dylib 0x9020c168 _pthread_wqthread + 941
        2 libSystem.B.dylib 0x9020bd86 start_wqthread + 30

        Thread 2: Dispatch queue: com.apple.libdispatch-manager
        0 libSystem.B.dylib 0x9020cb42 kevent + 10
        1 libSystem.B.dylib 0x9020d25c _dispatch_mgr_invoke + 215
        2 libSystem.B.dylib 0x9020c719 _dispatch_queue_invoke + 163
        3 libSystem.B.dylib 0x9020c4be _dispatch_worker_thread2 + 240
        4 libSystem.B.dylib 0x9020bf41 _pthread_wqthread + 390
        5 libSystem.B.dylib 0x9020bd86 start_wqthread + 30

        Thread 3:
        0 libSystem.B.dylib 0x901e635a semaphore_timedwait_signal_trap + 10
        1 libSystem.B.dylib 0x90213ea1 _pthread_cond_wait + 1066
        2 libSystem.B.dylib 0x90242a28 pthread_cond_timedwait_relative_np + 47
        3 com.apple.audio.CoreAudio 0x9056f965 CAGuard::WaitFor(unsigned long long) + 219
        4 com.apple.audio.CoreAudio 0x90572997 CAGuard::WaitUntil(unsigned long long) + 289
        5 com.apple.audio.CoreAudio 0x90570294 HP_IOThread::WorkLoop() + 1892
        6 com.apple.audio.CoreAudio 0x9056fb2b HP_IOThread::ThreadEntry(HP_IOThread*) + 17
        7 com.apple.audio.CoreAudio 0x9056fa42 CAPThread::Entry(CAPThread*) + 140
        8 libSystem.B.dylib 0x90213a19 _pthread_start + 345
        9 libSystem.B.dylib 0x9021389e thread_start + 34

        Thread 4 Crashed:
        0 emulator 0x00040380 audioInDeviceIOProc + 96

        Thread 4 crashed with X86 Thread State (32-bit):
          eax: 0x00000000 ebx: 0x007fd000 ecx: 0x000001fe edx: 0x0198f3f0
          edi: 0x00000200 esi: 0x01119850 ebp: 0x01119800 esp: 0xb020fad0
           ss: 0x0000001f efl: 0x00010212 eip: 0x00040380  cs: 0x00000017
           ds: 0x0000001f  es: 0x0000001f  fs: 0x0000001f  gs: 0x00000037
          cr2: 0x007fd000

    Read the article

  • Tomcat JNDI Connection Pool docs - Random Connection Closed Exceptions

    - by Andy Faibishenko
    I found this in the Tomcat documentation here. What I don't understand is why they close all the JDBC objects twice: once in the try {} block and once in the finally {} block. Why not just close them once, in the finally {} clause? These are the relevant docs:

    Random Connection Closed Exceptions. These can occur when one request gets a db connection from the connection pool and closes it twice. When using a connection pool, closing the connection just returns it to the pool for reuse by another request; it doesn't close the connection. And Tomcat uses multiple threads to handle concurrent requests. Here is an example of the sequence of events which could cause this error in Tomcat:

        1. Request 1 running in Thread 1 gets a db connection.
        2. Request 1 closes the db connection.
        3. The JVM switches the running thread to Thread 2.
        4. Request 2 running in Thread 2 gets a db connection (the same db connection just closed by Request 1).
        5. The JVM switches the running thread back to Thread 1.
        6. Request 1 closes the db connection a second time in a finally block.
        7. The JVM switches the running thread back to Thread 2.
        8. Request 2 (Thread 2) tries to use the db connection but fails because Request 1 closed it.

    Here is an example of properly written code to use a db connection obtained from a connection pool:

        Connection conn = null;
        Statement stmt = null;  // Or PreparedStatement if needed
        ResultSet rs = null;
        try {
            conn = ... get connection from connection pool ...
            stmt = conn.createStatement("select ...");
            rs = stmt.executeQuery();
            ... iterate through the result set ...
            rs.close();
            rs = null;
            stmt.close();
            stmt = null;
            conn.close();  // Return to connection pool
            conn = null;   // Make sure we don't close it twice
        } catch (SQLException e) {
            ... deal with errors ...
        } finally {
            // Always make sure result sets and statements are closed,
            // and the connection is returned to the pool
            if (rs != null) {
                try { rs.close(); } catch (SQLException e) { ; }
                rs = null;
            }
            if (stmt != null) {
                try { stmt.close(); } catch (SQLException e) { ; }
                stmt = null;
            }
            if (conn != null) {
                try { conn.close(); } catch (SQLException e) { ; }
                conn = null;
            }
        }

    Read the article

  • What are the specific names for these OLE controls?

    - by Kris
    I have been working on a block of code that lets me input values into a worksheet, as well as into an MS Graph object. I have succeeded in this, but it has presented me with a new set of problems:

    1. What are the control names for changing the visible size as well as the focus of a worksheet?
    2. What are the control names for changing/making background colours and borders?
    3. How do I create and define new worksheets and MS Graph objects inside the document?

    My example code so far:

        Option Explicit

        Dim objWord    'Word application object
        Dim objIShape  'Inline shapes object
        Dim objOLE     'OLE object

        Set objWord = CreateObject("Word.Application")
        objWord.Application.Documents.Open("C:\birdy.doc")
        objWord.Visible = True
        Set objIShape = objWord.ActiveDocument.InlineShapes

        Function count_filled_spaces(intOLENo, strRange)
            'Activates the inline shape by number (intOLENo) and defines it as the OLE object
            objIShape(intOLENo).OLEFormat.Activate
            Set objOLE = objIShape(intOLENo).OLEFormat.Object

            'Detects the ClassType of the inline shape and uses a class-specific
            'counter to count which datafields have data
            Dim strClass, i, p, intSheetno
            intSheetno = 1
            strClass = objIShape(intOLENo).OLEFormat.ClassType
            i = 0
            If Left(strClass, 8) = "MSGraph." Then
                For Each p In objOLE.Application.DataSheet.Range(strRange)
                    If p <> "" Then
                        i = i + 1
                    End If
                Next
            ElseIf Left(strClass, 6) = "Excel." Then
                For Each p In objOLE.Worksheets(intSheetno).Range(strRange)
                    If p <> "" Then
                        i = i + 1
                    End If
                Next
                objOLE.Worksheets(intSheetno).Range("B" & i + 1) = objOLE.Worksheets(intSheetno).Range("B" & i)
            End If
            count_filled_spaces = i
        End Function

        Dim strRange
        strRange = InputBox("Lol", "do eeet", "B1:B10")
        wscript.echo count_filled_spaces(2, strRange)

        'objWord.Application.Documents.Save
        'objWord.Application.Documents.Close
        'objWord.Application.Quit

        WScript.Quit(0)

    Read the article

  • What specific features of LabView are frustrating to you?

    - by Underflow
    Please bear with me: this isn't a language debate or a flame. It's a real request for opinions. Occasionally, I have to help educate a traditional text coder in how to think in LabVIEW (LV). Often during this process, I get to hear about how LV sucks. Rarely is this insight accompanied by rational observations other than "Language X is just so much better!". While this statement is satisfying to them, it doesn't help me understand what is frustrating them. So, for those of you with LabVIEW and text-language experience, what specific things about LV drive you nuts?

    ------ Summaries -------

    Thanks for all the answers! Some of the issues are answered in the comments below, some exist on other sites, and some are just genuine problems with LV. In the spirit of the original question, I'm not going to try to answer all of these here: check LAVA (http://www.lavag.org) or NI's website, and you'll be pleasantly surprised at how many of these things can be overcome.

      - Unintentional concurrency
      - No access to traditional text manipulation tools
      - Binary-only source code control
      - Difficult to branch and merge
      - Too many open windows
      - Text has cleaner/clearer/more expressive syntax
      - Clean coding requires a lot of time and manipulation
      - Large, difficult-to-access API/palette system
      - Mouse required
      - File namespacing: no duplicate files with the same name in memory
      - LV objects are natively by-value only
      - Requires dev environment to view code
      - Lack of zoom
      - Slow startup
      - Memory pig
      - "Giant" code is difficult to work with
      - UI lockup is easy to do
      - Trackpads and LV don't mix well
      - String manipulation is graphically bloated
      - Limited UI customization
      - "Hidden" primitives (yes, these exist)
      - Lack of official metaprogramming capability (not for much longer, though)
      - Lack of unicode support

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through the loader class and remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion; how do I interpret the work item numbers?

        [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
        [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
        <cut>
        [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
        [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
        <cut>
        [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
        [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
        [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable

    Read the article

  • How to render a DateTime in a specific format in ASP.NET MVC 3?

    - by Slauma
    If I have a property of type DateTime in my model class, how can I render it in a specific format, for example the format that ToLongDateString() returns? I have tried this...

        @Html.DisplayFor(modelItem => item.MyDateTime.ToLongDateString())

    ...which throws an exception, because the expression must point to a property or field. And this...

        @{
            var val = item.MyDateTime.ToLongDateString();
            Html.DisplayFor(modelItem => val);
        }

    ...which doesn't throw an exception, but the rendered output is empty (although val contains the expected value, as I could see in the debugger). Thanks for tips in advance!

    Edit: ToLongDateString is only an example. What I actually want to use instead of ToLongDateString is a custom extension method on DateTime and DateTime?:

        public static string FormatDateTimeHideMidNight(this DateTime dateTime)
        {
            if (dateTime.TimeOfDay == TimeSpan.Zero)
                return dateTime.ToString("d");
            else
                return dateTime.ToString("g");
        }

        public static string FormatDateTimeHideMidNight(this DateTime? dateTime)
        {
            if (dateTime.HasValue)
                return dateTime.Value.FormatDateTimeHideMidNight();
            else
                return "";
        }

    So I think I cannot use the DisplayFormat attribute and DataFormatString parameter on the ViewModel properties.
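    For what it's worth, since DisplayFor requires a property or field expression, one hedged workaround is simply to call the extension method directly in the view rather than going through a display template (a sketch; assumes the extension method's namespace is imported into the view):

        @item.MyDateTime.FormatDateTimeHideMidNight()

    Alternatively, the formatted value can be exposed as a read-only string property on the view model and passed to DisplayFor as a plain property.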

    Read the article
