Search Results

Search found 25015 results on 1001 pages for 'document management'.


  • Home Server: CPU virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs. (I have another question related to the storage management.) My use cases are:

    - A backup server: rsync and other services running.
    - A personal cloud server: a kind of owned Dropbox system, à la ownCloud. Only a few users foreseen.
    - A media server: streaming videos and displaying photos.

    Here are my environment and wishes:

    - Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology)
    - OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, SUSE and Ubuntu, but any other recommendation will be fine.
    - 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). These are only servers, so no graphical console is needed (I don't need VirtualBox).
    - By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance as with OpenStack.
    - Storage should be "virtualised/cloudified"; see my other question.
    - A VM should be able to be migrated to another server in the future if its performance requirements can no longer be fulfilled by the current server.
    - It does not matter if installation of such a setup is complicated, as long as the management tools allow for easy maintenance.
    - I don't have Windows at home, so the solution should be Linux friendly; web based would be nice, but native apps are OK too.
    - The system should be easy to enhance by adding a new server and migrating some of the VMs to it.

    So it's really a kind of private cloud on which I could run some Linux OSes. I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer. So Xen, KVM, VirtualBox or OpenStack: what would you recommend?


  • Can't Start SQL Server 2005 Agent - Start/Stop Are Not Enabled

    - by DaveB
    We have a brand new install of SQL Server 2005 on a Windows 2008 server. When using SQL Server Management Studio (2005 or 2008) from my Windows XP Professional workstation, if I right-click on the SQL Server Agent I get the context menu, but the Start and Stop options are not enabled (grayed out). I am using Windows authentication, and I am a member of the sysadmin and public SQL Server roles. Also, when right-clicking on Maintenance Plans and selecting New Maintenance Plan, nothing happens. I was able to create a maintenance plan with the wizard, but now I am unable to execute it because SQL Server Agent isn't running. From what I was told by an admin who had access to the server, he was able to log in to the box using the domain administrator account and start the SQL Server Agent service from the services applet or from the local instance of SQL Server 2005 Management Studio. Even after he started the service, it still didn't appear to be running when viewed from my workstation through Management Studio. What do I need to change to allow me to administer the agent and maintenance plans from my workstation? If I wasn't clear about anything, feel free to ask for clarification.
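
    One way to separate a permissions problem from a display problem is to ask the server directly, bypassing SSMS. A minimal sketch, assuming the default instance; SQLBOX stands in for the real server name (run both from the workstation):

        # Can the workstation even see the remote service state? (sc.exe talks to the remote SCM)
        sc.exe \\SQLBOX query SQLSERVERAGENT

        # What does SQL Server itself say about my role membership?
        sqlcmd -S SQLBOX -E -Q "SELECT IS_SRVROLEMEMBER('sysadmin') AS am_i_sysadmin;"

    If the query returns 1 but the sc.exe call is denied, one common cause of exactly this symptom is that SSMS grays out Start/Stop because it cannot read the service status remotely, which has nothing to do with your SQL Server roles.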


  • PowerConnect 3548p SNTP and web interface not working

    - by Force Flow
    I have been unable to get SNTP and access to the web interface working properly on a Dell PowerConnect 3548p. In the logs, this message appears over and over again:

        04-Jan-2000 20:19:29 :%MNGINF-W-ACL: Management ACL drop packet received on interface Vlan 172 from 172.17.0.3 to 172.18.0.10 protocol 17 service Snmp

    Here, 172 is the management VLAN, 172.17.0.3 is the DNS server, and 172.18.0.10 is the switch's IP address. The DNS server and the switch are located on different subnets, separated by routers. I am unable to access the web interface of the switch from the 172.17.x.x subnet; I can only access it from the 172.18.x.x subnet. There is also a managed Linksys switch on the 172.18.x.x subnet on VLAN 172 which has no problem with SNTP, and I can access it from the 172.17.x.x network. So it stands to reason that this is not a firewall or routing issue, but a problem with the 3548p itself. I suspect the issue is with management permissions/ACLs on the 3548p switch, but that's about as much as I've been able to determine so far. Any ideas?
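
    If the management ACL theory is right, permitting the 172.17.x.x subnet should confirm it. A sketch from memory, so verify the exact syntax against the PowerConnect 35xx CLI guide; mgmt-acl is a placeholder name, and if an ACL is already applied you would edit that one rather than create a new one:

        console# configure
        console(config)# management access-list mgmt-acl
        console(config-macl)# permit ip-source 172.17.0.0 mask 255.255.0.0
        console(config-macl)# permit ip-source 172.18.0.0 mask 255.255.0.0
        console(config-macl)# exit
        console(config)# management access-class mgmt-acl

    Take care to permit the subnet you are managing from before applying the access-class, or you can lock yourself out of the switch.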


  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has had random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
            AND SC.COLUMN_NAME = 'Identity'
            AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
            ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script itself is fine. It appears that something is corrupting the SQL that is received by the server. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from the period where an error occurs, which shows the SQL has already been corrupted by the time SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?
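
    One way to take SSMS out of the equation is a parse-only stress loop from machine A over a different client stack (sqlcmd). A minimal sketch; MACHINE-B and the paths are placeholders:

        # Prepend SET PARSEONLY ON (the server parses but does not execute the batch),
        # then send it repeatedly; -b makes sqlcmd exit non-zero on any error.
        "SET PARSEONLY ON;`r`nGO`r`n" + (Get-Content -Raw 'C:\temp\script.sql') |
            Set-Content 'C:\temp\parseonly.sql'
        for ($i = 0; $i -lt 1000; $i++) {
            sqlcmd -S MACHINE-B -E -b -i 'C:\temp\parseonly.sql' > $null 2>&1
            if ($LASTEXITCODE -ne 0) { "Run $i failed" | Add-Content 'C:\temp\parse-errors.log' }
        }

    If sqlcmd never reproduces the failure while SSMS does, suspect the client; if both fail intermittently, the network path (NICs, drivers, offload settings) becomes the prime suspect.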


  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following:

        ---> System.InvalidOperationException: Unable to generate a temporary class (result=1).
        error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found
        error CS2008: No inputs specified

    ...which occur in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report:

        Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer)

    If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases; if not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening:

        HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp
        HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp

    By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
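
    For anyone who wants to reproduce this without writing C++, the same broadcast can be sent from PowerShell via P/Invoke; a sketch equivalent to the one-line program mentioned above (SendMessageTimeout is the real Win32 API, the rest is glue):

        # Broadcast WM_SETTINGCHANGE "Environment" to all top-level windows.
        $sig = '[DllImport("user32.dll", SetLastError=true, CharSet=CharSet.Auto)] public static extern IntPtr SendMessageTimeout(IntPtr hWnd, uint Msg, UIntPtr wParam, string lParam, uint fuFlags, uint uTimeout, out IntPtr lpdwResult);'
        Add-Type -MemberDefinition $sig -Name NativeMethods -Namespace Win32
        $result = [IntPtr]::Zero
        # 0xffff = HWND_BROADCAST, 0x1A = WM_SETTINGCHANGE, 2 = SMTO_ABORTIFHUNG
        [Win32.NativeMethods]::SendMessageTimeout([IntPtr]0xffff, 0x1A, [UIntPtr]::Zero, 'Environment', 2, 5000, [ref]$result)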


  • What could be wrong with my VLAN?

    - by Matt
    I've got VLAN 10 set up as a management VLAN. The management VLAN comes off port 48 and links to another set of switches that do not support VLANs, so it was, I believe, set up as an untagged access port. In the past this was a different brand of switch and it worked fine. However, since changing to the HP V1910-48G series I can't seem to get this working. I must point out that as far as I'm aware it is wired up properly (I can't check this physically as I'm working remotely, and I have asked the tech who's got access to double-check for me). Now, I don't have a huge amount of experience with VLAN environments, but AFAIK this is right: I've set port 48 (linked to the management switches) as an untagged port with PVID 10 and an access link type. Is this all I'd need to do from a configuration perspective to ensure all devices connected to port 48 end up on VLAN 10 without needing to tag their frames, i.e. the tag would be added by the switch before being forwarded?


  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess resource in an existing Exchange configuration. All prerequisites are installed (the Exchange organization already exists). This is my resource section:

        WindowsProcess Exchange2013
        {
            Credential = $credential
            Path       = "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
            Ensure     = "Present"
        }   # End Filter
        }   # End Node
        }   # End configuration

        /*
        @TargetNode='TargetDSC02'
        @GeneratedBy=exadmin
        @GenerationDate=08/02/2014 08:16:03
        @GenerationHost=SOURCEDSC02
        */
        instance of MSFT_Credential as $MSFT_Credential1ref
        {
            Password = "Password1";
            UserName = "S05\\Exadmin";
        };

    Exadmin is a member of the Organization Management group and is also a member of the Domain Admins group, so it should be able to install Exchange. When I execute this resource, the Exchange installation starts, but after a minute it stops with this error:

        Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management' role group or a member of the 'Enterprise Admins' group to continue.]

    To be sure that permissions really are the problem, I created a special user with only local Administrator rights on the Exchange server and no Exchange permissions, and ran the following manually on the new Exchange server:

        .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013

    I got the same error as with DSC. After I added my test user to the Organization Management group and ran the same command again manually, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is one of permissions.
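
    Since the same command works interactively once the account is in the right group, the suspect is the token DSC builds for the credential. A hedged diagnostic, using the identical WindowsProcess mechanism to dump the groups in that token (paths are placeholders):

        WindowsProcess TokenDump
        {
            Credential = $credential
            Path       = "C:\Windows\System32\cmd.exe"
            Arguments  = "/c whoami /groups > C:\Temp\dsc-token.txt"
            Ensure     = "Present"
        }

    If 'Organization Management' is missing from the dump but present when you run whoami /groups interactively as Exadmin, the group is not making it into the logon session DSC creates; in that case, running the setup through a Script resource with PsDscRunAsCredential (WMF 5) is worth a try as an alternative way to carry the right identity.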


  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables, because the bandwidth requirement to do log shipping of the entire database (our current solution) is too high. Replication directly to our servers is also out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to do some form of replication on the tables we need to another database on the same server, and do log shipping of that second, smaller database; but management is concerned because the clients have broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to use it. This is to be set up on multiple sites running either SQL 2005 or SQL 2008, and we will have both versions on our end to restore the data to, so that is not an issue. Thank you.
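
    If replication stays vetoed, a low-tech alternative is a scheduled job that exports just the needed tables with bcp and ships them over whatever outbound channel already exists. A rough sketch; the table list, paths and server name are placeholders:

        # Export selected tables in native format for shipping offsite.
        $tables = 'dbo.Customers', 'dbo.Orders'      # only the tables actually needed
        foreach ($t in $tables) {
            $out = "C:\Ship\" + $t.Replace('.', '_') + ".bcp"
            bcp "BigDb.$t" out $out -S CLIENTSQL -T -n    # -T trusted auth, -n native format
        }
        # ...then compress C:\Ship\* and push it on the nightly transfer job.

    The trade-off versus transactional replication is point-in-time snapshots instead of continuous updates, but the bandwidth drops to the size of the chosen tables.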


  • Un-do Windows disk convert HFS+

    - by BLAKE
    Last night, a friend asked me to give him a copy of a Word document. He handed me an external hard drive and left. I plugged the hard drive into my file server running Windows Server 2003, opened Disk Management and clicked OK. (I know that in Windows 2003 you need to manually assign a drive letter to external drives.) I then looked at the drive in Disk Management and it said that it was unallocated space. I called my friend and he said that there was data on the drive, but that he used it with his MacBook. Apparently, when I clicked OK in Disk Management I converted the drive from the HFS+ file system to something else. Is there any way to undo the disk conversion? I immediately removed the drive, so nothing was written to it. Windows did not format the drive, it just converted it. Is the data still there? All the data recovery programs I have are for Windows; can they read the Mac file system? I need to get the data back. What can I do?


  • Volume is no longer showing in Raid Controller BIOS and in Windows

    - by Gordon
    Hi all, I installed some critical Windows updates yesterday, and now my external RAID volume no longer shows in Windows Vista x64. All updates went through successfully. From their descriptions, I cannot see how they would relate to the issue, but this is the only change that happened, so who knows. Anyway, here are the details: I have an external eSATA enclosure that is running on a SiI4726 controller. I can connect to the controller with its management utility from the computer the enclosure is connected to. The three drives in the enclosure show up as JBODs. I had those drives configured as one logical RAID 5 drive. RAID management is done through a SiI3132 SoftRaid controller. The RAID management utility just shows empty channels where it usually shows the RAID group. In the Windows Disk Manager, I can see an unknown uninitialized device; this is fine according to the setup manual. What it doesn't show is my RAID drive. It's gone. Also, when booting Windows, the BIOS of the controller used to show the RAID volume before booting the OS. This is not happening anymore. Updating drivers and firmware did not help. I have made sure the drivers and firmware are compatible with each other. And like I said, it used to work before. Any clues?


  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router with an AlliedWare OS (2.9.1), I would like to disable access to all management services of the router except from a number of subnets (or alternatively have what is a "management VLAN" with other manufacturers' switch and router models). What I have tried so far:

    - Creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended, as I can access the LOCAL IP set from any of the other VLAN interfaces. The traffic is apparently not going through my defined filter set at all.
    - Creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all; the counters for the filter set remain at zero packets.
    - Setting the Remote Security Officer Level IP address range: this only restricts the ability of a user with the Security Officer privilege level to log in from any but the specified address ranges/subnets. Unfortunately, it does not prevent service availability (and thus DoS capacity) or the ability to log in as a less privileged user (e.g. a "manager").
    - Calling technical support: unfortunately no solution so far.

    What I have not tried: creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP. I would like to reduce the overhead induced by IP filters, as the router is already CPU-constrained at times. Setting up filters for every IP interface would mean that each and every traffic packet would have to pass the filters, thus consuming CPU cycles. If by any means possible, I would like to find a different solution.



  • Good support for multiple desktops AND multiple monitors in Linux (Ubuntu)?

    - by Somebody still uses you MS-DOS
    I'm starting to have A LOT of open windows on my machine. Sometimes within a project I have e-mail/task management/personal e-mail/twitter, and a lot of different open applications/terminals in my Linux environment. Nowadays I have four workspaces:

    - Corporate management (e-mail) and corporate messenger
    - Work (documents, requisites)
    - Dev (development: all gVim windows, terminal and Firefox for development)
    - Personal (personal stuff: personal e-mail, delicious, twitter and so on)

    Sometimes it would be interesting to have different workspaces for projects instead of the configuration I have nowadays, which is organized by classes of work (bad name, I know, but I think you get the idea). I'm starting to think about using two monitors: one with Corporate Management, Work and Personal. The second monitor is only for development: each workspace there would be for a project being worked on, instead of the groups of work from before. One workspace might be for implementing different classes, for example. My question is: I just want to change to the second monitor using the mouse, while still being able to change workspaces on the same monitor using keyboard shortcuts. The keyboard shortcuts wouldn't change monitors, just workspaces on the same monitor. Does Linux (Ubuntu 10.04 Lucid Lynx) support this envisioned setup? If so, how?


  • How to format my external HDD back to "removable storage"?

    - by user990106
    Recently I formatted my Seagate FreeAgent GoFlex external HDD in Mac OS X using a GUID partition table, since I wanted to install another copy of Mac OS X onto that external HDD. However, I changed my mind after the drive had been formatted. Now I want to format it back to NTFS so that I can use it with Windows 7. However, after I connected the external HDD via USB it didn't show up in "Computer", so I used Disk Management to check what was wrong with it. In Disk Management I saw a partition on the drive called "EFI partition", and I found that I could not delete this partition from Disk Management. So I used diskpart in cmd, selected the external HDD, and ran the "clean" command. The EFI partition was then gone and I created a new volume on the drive. However, after the volume was created, the external HDD did show up in "Computer", but under "Hard Disk Drives", not under "Devices with Removable Storage" as it used to. I'm wondering if I can do anything to make it be recognized as a "Device with Removable Storage" again?
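
    To get from the leftover GUID layout back to a plain NTFS disk, a diskpart script is enough. Disk 1 is a placeholder: confirm the number with diskpart's list disk first, because clean destroys everything on the selected disk. Note that whether Windows files a drive under "Devices with Removable Storage" is governed by what the USB bridge reports (the removable-media bit), not by the partition style, so this may not change the grouping by itself:

        # Recreate a single NTFS volume on the external disk (destructive!).
        $cmds = 'select disk 1', 'clean', 'convert mbr',
                'create partition primary', 'format fs=ntfs quick label=External', 'assign'
        Set-Content -Path "$env:TEMP\recreate.txt" -Value $cmds
        diskpart /s "$env:TEMP\recreate.txt"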


  • 2008 R2 Standard, Hyper-V and RAM Usage (Usable vs Available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management, we bought a fairly beefy machine to handle this task and also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving towards a full Lab Management setup using Hyper-V. My question: upon initially logging in, I noticed that Windows registers 64 GB but shows only 32 GB as usable. I know this is a limitation of licensing, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64 GB? Before I go hog wild with TFS, I wanted to ask some experts so I don't need to reinstall the OS down the road to utilize the additional 32 GB. Thanks for any help or links you can share.
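
    A quick sanity check on the host shows the gap between what is installed and what the running edition will address (standard WMI classes, run locally):

        # Physically installed RAM vs. what this Windows edition actually addresses.
        $installed = (Get-WmiObject Win32_PhysicalMemory | Measure-Object Capacity -Sum).Sum / 1GB
        $visible   = (Get-WmiObject Win32_OperatingSystem).TotalVisibleMemorySize / 1MB
        "Installed: $installed GB; visible to the OS: $([math]::Round($visible)) GB"

    Whether Hyper-V guests can draw on memory beyond the parent edition's limit is exactly the licensing question to settle before building on this box; the check above at least confirms how much the host itself is seeing.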


  • Announcing the new Oracle Retail Workspace, A Configuration of Oracle WebCenter Spaces 11.1.1.5 for Oracle Retail

    - by Oracle Retail Documentation Team
    For the Oracle Retail 13.2.x enterprise, Oracle Retail Workspace 13.2.4 replaces previous versions of Oracle Retail Workspace. Oracle Retail Workspace 13.2.4 is a supported configuration of Oracle WebCenter Spaces 11.1.1.5 for Oracle Retail.

    Supported Product Overview

    In order to provide a next-generation Oracle user engagement platform for the retail industry, Oracle Retail Workspace leverages WebCenter Spaces. Oracle Retail Workspace is not a licensed retail application with any code. Instead, retailers purchase the underlying technology and then leverage the Oracle Retail Workspace Implementation Guide to configure a portal utilizing Oracle WebCenter Spaces. Oracle Retail Workspace has been repositioned as a configuration of Oracle WebCenter Spaces for the following reasons:

    - The Oracle Retail Workspace configuration utilizes the external application functionality and the application navigator taskflow of the Oracle WebCenter Framework to configure Oracle Retail applications in Oracle WebCenter Spaces.
    - The Oracle WebCenter Framework improves IT development cycle times by blending Web 2.0 services, processes, business intelligence, and transactions in an integrated JSF framework.
    - Oracle WebCenter Spaces 11g offers features provided by the previous versions of Oracle Retail Workspace that enable retailers to leverage a productive portal-based environment.

    List of Documents

    The following are included in Workspace 13.2.4, A Configuration of WebCenter Spaces 11.1.1.5 for Oracle Retail:

    - Oracle Retail Workspace Release Notes
    - Oracle Retail Workspace Implementation Guide

    Workspace Retail Library (Unsupported)

    The Oracle Retail Workspace Retail Library is comprised of previously published accelerator documents and sample code downloads hosted on My Oracle Support. They are not supported, nor are they associated with the support lifecycle of the Workspace application. See Doc ID 1461281.1: Oracle Retail Workspace Retail Library.

    - Oracle Retail Workspace Retail Library Reference Guide: A set of Micro-Applications that can be used to perform some of the operations of Oracle Retail Merchandising System (RMS) from outside the application. This document describes the functional and technical design details of the Micro-Applications available in this release, including the following and more: Create Regular Item, Create Purchase Order, Item Transfer, Update Vendor.
    - Oracle Retail Fashion Planning Bundle Reports documentation: The Oracle Retail Fashion Planning Bundle Reports package includes role-based Oracle Business Intelligence (BI) Enterprise Edition (EE) reports and dashboards that provide an illustrative overview highlighting the Fashion Planning Bundle solutions. These dashboards can be leveraged out-of-the-box or can be used along with other dashboards and reports that may have already been created to support a specific solution or organizational needs. This package includes dashboards for the Assortment Planning, Item Planning, Item Planning Configured for COE, Merchandise Financial Planning Retail Accounting, and Merchandise Financial Planning Cost Accounting applications.
    - Oracle Retail Accelerators for WebLogic Server 11g Micro-Applications Development Tutorial: This tutorial describes how you can create a Micro-Application for the Create a Regular Item task in the Retail Merchandising System (RMS) application using Oracle JDeveloper and ADF.
    - Retail Accelerators: Developing ADF Reports for RPAS: This document illustrates how you can use the Oracle Application Development Framework 11g (ADF) to generate reports that provide insights from the Oracle Retail Predictive Application Server (RPAS) based applications.
    - Oracle Retail Accelerators Guide for WebCenter 11g: This guide describes how you can integrate Oracle Retail applications with Oracle WebCenter Spaces and customize WebCenter Spaces to include custom-developed content.
    - Oracle Retail Accelerators, Developing Oracle BI EE Reports on RPAS Domain Data: This document illustrates how you can set up the integration between BI EE and RPAS domains to generate BI EE reports and dashboards for RPAS.
    - Oracle Retail Accelerators, Developing Oracle BI EE Reports on RPAS Workbooks: This document outlines a process to create real-time Oracle Business Intelligence (BI) Enterprise Edition reports against RPAS workbooks dynamically, as opposed to going directly against the RPAS domain for the data.


  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
    Last month I blogged about the Extension Manager that is built into VS 2010, as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio. The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality of VS 2010.

    New VS 2010 Productivity Power Tools Release

    Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions that provide a bunch of cool new functionality for you to take advantage of. You can download and install the release for free here. Some of the code editor improvements it provides include:

    - Entire Line Highlighting: Makes it easier to track cursor location within the editor
    - Entire Line Selection: Triple-clicking a line in the code editor now selects the entire line (like with MS Word)
    - Code Block Movement: Alt+Up/Down Arrow now moves selected code blocks up/down in the editor
    - Consistent Tabs vs. Spaces: Ensure consistent tab vs. space usage across your projects
    - Colorized Parameters: It is now easier to see/identify method parameters
    - Column Guide: You can now add vertical column guidelines to help with text alignment and sizes
    - Align Assignments: Makes it easier to line up multiple variable assignments within your code
    - HTML Clipboard Support: Copy/paste code from VS into an HTML buffer (useful for blogging!)
    - Ctrl+Click Go To Definition: You can now hold down the Ctrl key and click a type to go to its definition

    It also includes several tab management improvements for managing document tabs within the IDE:

    - Show Close Button in Tab Well: Shows a close button in the document well for the active tab (like VS 2008 did)
    - Colored Tabs: You can now select the color of each document tab by project or by regex
    - Pinned Tabs: Enables you to pin tabs to keep them always visible and available
    - Vertical Tabs: You can now show document tabs vertically to fit more tabs than normal
    - Remove Tabs by Usage Order: Better behavior when adding new tabs and one needs to be hidden for space reasons
    - Sort Tabs by Project: Tabs can be sorted by the project they belong to, keeping them grouped together
    - Sort Tabs Alphabetically: Tabs can be sorted alphabetically

    And last, but not least, it includes a new and improved "Add Reference" dialog. This new Add Reference dialog caches assembly information, which means it loads within a second or two (note: the very first time it still loads assembly data, but it then caches it and makes it fast afterwards). The new Add Reference dialog also now includes searching support, making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason's blog post about the release.

    New Visualization and Modeling Feature Pack Release

    Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate. You can download it here. The Visualization and Modeling Feature Pack includes a bunch of great new capabilities, including:

    - Web Site Visualization: New support for generating a DGML visualization for ASP.NET projects
    - C/C++ Native Code Visualization: New support for generating DGML diagrams for C/C++ projects
    - Generate Code from UML Class Diagrams: You can now generate code from your UML diagrams
    - Create UML Class Diagrams from Code: Create UML diagrams from existing code bases
    - Import UML from XML: Import UML class, sequence, and use case elements from XMI 2.1 files
    - Custom Validation Layer Rules: Write custom code to create, modify, and validate layer diagrams

    Jason's blog post covers more about these features as well.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Bind a Wijmo Grid to Salesforce.com Through the Salesforce OData Connector

    - by dataintegration
    This article will explain how to connect any RSSBus OData Connector to Wijmo's data grid using JSONP. While the example uses the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and the Wijmo JavaScript library.

    Step 2: Configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide, which will walk you through setting up the Salesforce Connector.

    Step 3: Once you have successfully configured the Salesforce Connector application, open a Wijmo sample grid file to edit. This example uses the overview.html grid found in the Samples folder.

    Step 4: First, wrap the jQuery document-ready function in a callback function for the JSONP service. In this example, we wrap it in a function called fnCallback, which takes a single object, args:

        <script id="scriptInit" type="text/javascript">
          function fnCallback(args) {
            $(document).ready(function () {
              $("#demo").wijgrid({
                ...
              });
            });
          };
        </script>

    Step 5: Next, format the columns object in the format that Wijmo's data grid expects. This is done by adding the headerText: element for each column:

        <script id="scriptInit" type="text/javascript">
          function fnCallback(args) {
            var columns = [];
            for (var i = 0; i < args.columnnames.length; i++) {
              var col = { headerText: args.columnnames[i] };
              columns.push(col);
            }
            $(document).ready(function () {
              $("#demo").wijgrid({
                ...
              });
            });
          };
        </script>

    Step 6: Now the wijgrid parameters are ready to be set. In this example, we set the data input parameter to the args.data object and the columns input parameter to our newly created columns object. The resulting JavaScript function should look like this:

        <script id="scriptInit" type="text/javascript">
          function fnCallback(args) {
            var columns = [];
            for (var i = 0; i < args.columnnames.length; i++) {
              var col = { headerText: args.columnnames[i] };
              columns.push(col);
            }
            $(document).ready(function () {
              $("#demo").wijgrid({
                allowSorting: true,
                allowPaging: true,
                pageSize: 10,
                data: args.data,
                columns: columns
              });
            });
          };
        </script>

    Step 7: Finally, add the JSONP reference to your Salesforce Connector's data table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the JSONP URL, you will need to supply a valid table name that you want to connect with Wijmo. In this example, we connect to the Lead table. You will also need to add authentication options in this step. In the example, we append the authtoken of the user who has access to the Salesforce Connector using the @authtoken query string parameter. IMPORTANT: This is not secure and will expose the authtoken of the user whose authtoken you supply in this step. There are other ways to secure the user's authtoken, but this example uses a query string parameter for simplicity.

        <script src="http://localhost:8181/sfconnector/data/conn/Lead.rsd?@jsonp=fnCallback&sql:query=SELECT%20*%20FROM%20Lead&@authtoken=<myAuthToken>" type="text/javascript"></script>

    Step 8: Now we are done. If you point your browser to the URL of the sample, you should see your Salesforce.com leads in a Wijmo data grid.


  • Oracle Tutor: Installing Is Not Implementing, or Why CIOs Should Care About End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed the Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business which had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for goodwill to be reestablished, and this itself diminished a new product sales push.

    I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand, and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper applications use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now."

    Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to effective end user training material and a smooth implementation.

    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Chuck Jones, Product Manager, Oracle Tutor and BPM


  • Receiving and organizing results without server-side script (JavaScript)

    - by Aaron
    I have been working on a very large form project for the past few days. I finally managed to get tables to work properly within a JavaScript file that opens a new display window. Now the issue at hand is that I can't seem to get CSS code to work within the JavaScript that I have created. Before everyone starts thinking "just use server-side script, idiot", I have a few conditions and some info about the file:

    - The file is only being run locally due to confidential information risks. Once again, no option for server access.
    - The intranet the computers are on already has top security, and this wouldn't exactly be a company-wide program.
    - The code below is obviously just a demo with a simple form... the real file has six pages of highly confidential information.
    - Only certain fields on this form will actually be gathered (example: address doesn't appear in the results).
    - The display page will contain data compiled into tables for easier viewing.
    - I need to be able to create CSS commands to easily detect certain information if it applies, along with matching the design of the original form.

    Here is the code:

        <html>
        <head>
        <title>Form Example</title>
        <script language="JavaScript" type="text/javascript">
        function display() {
          DispWin = window.open('', 'NewWin', 'toolbar=no,status=no,width=800,height=600');
          message = "<body>";
          message += "<table border=1 width=100%>";
          message += "<tr>";
          message += "<th colspan=2 align=center><font face=stencil color=black><h1>Results</h1><h4>one</h4></font>";
          message += "</th>";
          message += "</tr>";
          message += "<td width=50% align=left>";
          message += "<ul><li><b><font face=calibri color=red>NAME:</font></b> " + document.form1.yourname.value + "</ul>";
          message += "</td>";
          message += "<td width=50% align=left>";
          message += "<li><b>PHONE: </b>" + document.form1.phone.value + "</ul>";
          message += "</td>";
          message += "</table>";
          DispWin.document.write(message);
          DispWin.document.body.style.cssText = 'color:#blue;';
        }
        </script>
        </head>
        <body>
        <h1>Form Example</h1>
        Enter the following information:
        <form name="form1">
        <p><b>Name:</b> <input type="text" size="20" name="yourname"></p>
        <p><b>Address:</b> <input type="text" size="30" name="address"></p>
        <p><b>Phone:</b> <input type="text" size="15" name="phone"></p>
        <p><input type="button" value="Display" onClick="display();"></p>
        </form>
        </body>
        </html>


  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish, and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown:

        var url = 'http://' + document.location.host + '/glassfish-sse/simple';
        eventSource = new EventSource(url);
        eventSource.onmessage = function (event) {
            var theParagraph = document.createElement('p');
            theParagraph.innerHTML = event.data.toString();
            document.body.appendChild(theParagraph);
        }

    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you would parse JSON and do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:

        @ServerSentEvent("/simple")
        public class MySimpleHandler extends ServerSentEventHandler {
            public void sendMessage(String data) {
                try {
                    connection.sendMessage(data);
                } catch (IOException ex) {
                    . . .
                }
            }
        }

    And then events can be sent to this handler using a singleton session bean as shown:

        @Startup
        @Stateless
        public class SimpleEvent {
            @Inject
            @ServerSentEventContext("/simple")
            ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

            @Schedule(hour="*", minute="*", second="*/10")
            public void sendDate() {
                for(MySimpleHandler handler : simpleHandlers.getHandlers()) {
                    handler.sendMessage(new Date().toString());
                }
            }
        }

    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:

        Accept: text/event-stream
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cache-Control: no-cache
        Connection: keep-alive
        Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
        Host: localhost:8080
        Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
        User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5

    And the response headers as:

        Content-Type: text/event-stream
        Date: Thu, 14 Jun 2012 21:16:10 GMT
        Server: GlassFish Server Open Source Edition 4.0
        Transfer-Encoding: chunked
        X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)

    Notice, the MIME type of the messages from the server to the client is text/event-stream, and that is defined by the specification.

    The code in Bhakti's blog can be further simplified by using the recently introduced Twitter API for Java, as shown below:

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendTweets() {
            for(MyTwitterHandler handler : twitterHandler.getHandlers()) {
                String result = twitter.search("glassfish", String.class);
                handler.sendMessage(result);
            }
        }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.


  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, which is our Endeca Server. In the last post on this subject, we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, include:

    - The Endeca Server supports set search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query, not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.
    - The Endeca Server supports second-order relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result. Determining this second-order relevance is the key to providing effective guidance.
    - Support for queries and filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.

    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important: they have helped to guide our great customers to success.


  • Any alternative to hide the querystring from Html.ActionLink on an ASP.NET MVC page?

    - by Madhavi
    Hi, I have a page called SearchDcouments.aspx that displays all the matching documents as hyperlinks in a table format, as below. When the user clicks on any particular document, the sample URL will be:

        http://localhost:52483/Home/ShowDocument?docID=280

    The DocumentID is passed as a query string to the controller method ShowDocument, and the ID is visible in the URL. For security purposes, I want to hide this way of passing the query string parameters. I am wondering what the alternatives are for hiding the DocID from the URL? Appreciate your responses. Thanks.

    Code in the view:

        <tbody>
        <% foreach (var item in Model) { %>
            <tr>
                <% string actionTitle = item.DocumentType.ToLower() == "letter" ? "Request References" : "Request Slides"; %>
                <td>
                    <%= Html.ActionLink(actionTitle, MVC.Home.ShowDocument(item.DocumentID)) %>
                </td>
            </tr>
        <% } %>
        </tbody>

    Code in the controller:

        [Authorize]
        [HttpGet]
        public virtual ActionResult ShowDocument(int docID)
        {
            Document document = miEntity.GetDocumentByID(docID);

            switch (document.DocType.ToLower())
            {
                case "slide":
                    SlideRequestViewModel slide = new SlideRequestViewModel(docID);
                    return View(MVC.Home.Views.ShowSlideRequest, slide);
                case "letter":
                    RefRequestViewModel rrq = new RefRequestViewModel(docID);
                    return DoShowRefRequest(rrq);
                default:
                    break;
            }

            // Here - let's get back home
            return RedirectToAction(MVC.Home.Default());
        }


  • WPF Richtextbox XamlWriter behaviour

    - by Krishna
    I am trying to save some C# source code into a database. Basically, I have a RichTextBox that users can type their code into and save to the database. When I copy and paste from the Visual Studio environment, I would like to preserve the formatting, etc. So I have chosen to save the FlowDocument's XAML to the database and set it back to RichTextBox.Document. My two functions below serialise and deserialise the RTB's contents:

        private string GetXaml(FlowDocument document)
        {
            if (document == null)
                return String.Empty;

            StringBuilder sb = new StringBuilder();
            XmlWriter xw = XmlWriter.Create(sb);
            XamlDesignerSerializationManager sm = new XamlDesignerSerializationManager(xw);
            sm.XamlWriterMode = XamlWriterMode.Expression;
            XamlWriter.Save(document, sm);
            return sb.ToString();
        }

        private FlowDocument GetFlowDocument(string xamlText)
        {
            var flowDocument = new FlowDocument();
            if (xamlText != null)
                flowDocument = (FlowDocument)XamlReader.Parse(xamlText);

            // Set return value
            return flowDocument;
        }

    However, when I serialise and deserialise the following code, I notice this incorrect(?) behaviour. The original code:

        using System;

        public class TestCSScript : MarshalByRefObject
        {
        }

    After a serialise/deserialise round trip, it comes back as:

        using System;

        public class TestCSScript : MarshalByRefObject
        {}{
        }

    Notice the new set of "{}". What am I doing wrong here? Thanks in advance for the help!


  • Where do all the old programmers go?

    - by Tony Lambert
    I know some people move over to management and some die... but where do the rest go, and why? One reason people change to management is that in some companies the "programmer" career path is very short: you can get to be a senior programmer within a few years, leaving no way to earn more money except by becoming a manager. In other companies, project managers and programmers are parallel career paths, so your project manager can be your junior.

    Tony

