Search Results

Search found 17610 results on 705 pages for 'specific'.


  • New Experts Direct Contribution - Multiple Currency in Analytics

    - by Cheryl
    We do our best to anticipate what you need to know when we design and write our courses for CRM On Demand. But we know that we cannot hit on every situation or implementation scenario that you might encounter. That's why I love our Experts Direct program - this is where we encourage our wide network of CRM On Demand experts to contribute knowledge that they have gained from working directly with companies on their specific challenges or questions. (See Direct From Our Experts!) The latest Experts Direct contribution comes from Leon Dolman, who works with CRM On Demand customers every day. Leon addresses what you should expect to see in your reports and in the application when your company's users enter opportunity revenue information in more than one currency. He works through a scenario to show how currency settings can affect the data that you see in your reports. For example, do you know what you will see in your Opportunity reports if you have two different currencies represented, besides your company's default currency, but your company administrator has only set exchange rates for one of them? Leon knows...and now he has shared that knowledge - and more - with the rest of us. Go to the Multiple Currency in Analytics item in the Training and Support Center to read more - and while you're there, take a look at the other Experts Direct content to tap into the expert knowledge that we're collecting for you. Just click the Browse More Topics link in the Experts Direct box on the home page to see the full list. And let us know if there are other topics that you'd like to see our experts address. Post a comment to start a conversation or send us an email.

    Read the article

  • The newest OPN Competency Center (OPN CC) enhancements are now available

    - by mseika
    The newest OPN Competency Center (OPN CC) enhancements are now available. This release is focused on a new look and simplified navigation, and Resell Competency Tracking functionality. Some of the key features released include:

    1. New Look and Feel with Simplified Navigation - Users are now one click away from the most valuable resources. Additionally, there are now focused areas which allow users to navigate more effectively. Users can review their individual achievements, create a dedicated Training Plan to broaden their knowledge, or use the new Company Corner to view their company’s achievements. Your view as an Oracle employee has been modified, and the Company Corner will provide links to allow access to Partner Workgroups and other links specific to your partner data.

    2. Resell Competency Tracker - This new functionality has been created to allow partners to track their progress toward becoming Resell Authorized. The Resell Competency Tracker highlights those Knowledge Zones where additional requirements must be achieved before Distribution Rights are granted and allows partners to track their progress. This tracker is available to all users badged to a company ID and also to internal Oracle employees who have existing access to Partner Workgroups.

    3. Enhanced Training Manager Functionality - The existing Training Manager has been enhanced to allow partners to create Workgroups that are focused either on the competency requirements for becoming Specialized or on the competency requirements needed to apply for resell authorization.

    Please mark your calendars and plan on joining an internal demonstration of these features and enhancements: Wednesday, September 26 @ 7am & 8pm PT. A public Beehive web conference has been scheduled. Intercall: 5210981 / 2423

    Read the article

  • PostgreSQL 9.2: where is initdb located on Ubuntu?

    - by thanikkal
    I am trying to install Postgres on EC2 / EBS. I am following this article and am stuck at the following step:

        sudo su -
        su postgres -
        /usr/pgsql-9.0/bin/initdb -D /pgdata

    I can't find the initdb command at the stated location; as a matter of fact, I can't find the pgsql* directory at all under the /usr folder. Was this changed for Postgres 9.2, or is there an alternate command that would help me run initdb? edit 1: I know the folder pgsql-9.0 is version specific, so I was expecting to see something more like pgsql-9.2 or similar.
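
    A hedged note on paths: the /usr/pgsql-<version> layout comes from the Red Hat/PGDG RPM packages, not the Ubuntu ones. The Ubuntu/Debian packages install version-specific binaries under /usr/lib/postgresql/<version>/bin and ship wrapper tools for cluster management. A minimal sketch, assuming the stock Ubuntu 9.2 packages:

        # Where the Ubuntu/Debian packages put initdb:
        ls /usr/lib/postgresql/9.2/bin/initdb

        # Run it directly against a custom data directory:
        sudo -u postgres /usr/lib/postgresql/9.2/bin/initdb -D /pgdata

        # Or let the Debian wrapper create and register a cluster there:
        sudo pg_createcluster --datadir=/pgdata 9.2 main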

    Read the article

  • Forcing logon to Air Watch server upon joining wifi

    - by DKNUCKLES
    I'm setting up a wireless controller that I would like to leave as unsecured. When a user connects to this network they need to be forwarded to a specific page where they can authenticate with the AirWatch system they have in place. Once authentication takes place, a profile will be downloaded to their device and we can administer the devices accordingly. I'm mulling over how I can force the page to the user when they log in. The methodology I'm thinking about working with is creating a NAT rule for that VSC that would forward all port 80 and 443 traffic to the AirWatch server. Once they authenticate, a profile will be downloaded which will connect the devices to a Virtual Access Point whose SSID isn't broadcast. Is this methodology correct, or can someone think of an easier / more efficient way of accomplishing this? The controller is an HP MSM720 for what it's worth.
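
    The MSM720 has its own captive-portal and NAT configuration screens, so the exact steps are controller-specific; purely as an illustration of the redirect idea described above, the equivalent rules on a Linux-based gateway would look roughly like this (10.0.0.5 and wlan0 are hypothetical placeholders):

        # Send all unauthenticated web traffic from the open SSID to the AirWatch server
        iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80  -j DNAT --to-destination 10.0.0.5:80
        iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.5:443

    Note that redirecting port 443 this way will trigger certificate warnings, since the client asked for a different host than the one answering.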

    Read the article

  • Trying to mount an NFS share on a Windows machine at startup with the Z: drive letter for all users

    - by ScottC
    Windows Server 2008. We are trying to mount a specific drive letter on a Windows machine from a Unix machine. We need the mount to be available to the server even if no users are logged in, as well as to users who are logged in. If we run the command from the command prompt manually, it connects, we have access to the NFS share, and we can open it and see and edit files:

        mount -o fileaccess=777 anon \\127.0.0.1\nav z:

    (IP address replaced with 127.0.0.1 for security reasons.) However, if we try to automate the task by making an entry in the task scheduler to execute the batch script at boot time, it adds a drive to the list in 'My Computer', but it is disconnected, and when trying to access the drive an error is produced: "Z: is not accessible. The data area passed to a system call is too small." Tried as administrator with highest privileges, as SYSTEM (group), and as my user (administrator-level user) - same results. Is there another way to do this? Most of the help I have found online suggests this way but it keeps failing.
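
    One likely culprit, offered as a hedged sketch rather than a definitive fix: Windows drive letters are scoped to a logon session, so a Z: mapped by a boot-time task running as SYSTEM lives in session 0 and shows up as a disconnected ghost in everyone else's session. Two common workarounds:

        rem 1) Map the drive per user at logon instead (e.g. via a GPO logon script):
        mount -o fileaccess=777 anon \\127.0.0.1\nav z:

        rem 2) If elevated and non-elevated processes disagree about the mapping,
        rem    linking the two token contexts is a known registry tweak:
        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1

    Services that need the share while nobody is logged on are usually better off addressing the UNC path \\127.0.0.1\nav directly rather than relying on a drive letter.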

    Read the article

  • What are the benefits of full VDI over Remote Desktop Services?

    - by Doug Chase
    We're talking about piloting VDI here, but the more research I do, the more it seems like it would make more sense just to upgrade and expand our TS (RDS) environment. I feel like you can pull off more sessions per core on RDS than on any VDI solution I've looked at. Is this the case? Is there a decision matrix anywhere describing the benefits of using full virtualized desktops over using a remote desktop farm? We need good video performance for clinical imaging - will this work better on one infrastructure or the other? (Does this question have a specific enough answer for it to be on SF? Regardless, I feel like having this here will be helpful for someone in the future...)

    Read the article

  • Customizing post-commit messages in svn for different users

    - by Suresh
    I have an svn repository that users can access (read/write) using their account OR via tunneling over ssh with svnserve. I also have a post-commit hook that sends mails to specific users for different projects via svnnotify; the typical command is svnnotify <params> --to-regex-map <list of email IDs> <regex>. For users who have accounts on the system, the notification email is sent from <user>@machine.domain, which is fine. For users coming in via tunneling, the email gets sent from <tunnel-user>@machine.domain, which is a fake address since these users don't have an account - the only reason I specify a tunnel-user id is to keep track of who made which update. So my question (finally) is: is there a way to pass a parameter (the "true" email address) to svnserve so that when the post-commit mail is sent, it can be sent "from" the correct email address? P.S. This is my first post here - if I haven't provided sufficient information, apologies; I'm happy to provide more details.
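
    One approach, sketched under the assumption that svnnotify's --from option is available (it is documented alongside --to-regex-map): keep a small author-to-address map on the server and have the hook look up the committing author, whether it is a real account or a tunnel-user id. The map file path and addresses below are hypothetical:

        #!/bin/sh
        # post-commit hook sketch: derive the From: address from the commit author
        REPOS="$1"
        REV="$2"
        AUTHOR=$(svnlook author "$REPOS" -r "$REV")
        # /etc/svn/authors.map lines look like:  jsmith jsmith@example.com
        FROM=$(awk -v u="$AUTHOR" '$1 == u { print $2 }' /etc/svn/authors.map)
        svnnotify --repos-path "$REPOS" --revision "$REV" \
                  --from "${FROM:-svn-noreply@example.com}" \
                  --to-regex-map ...

    The --to-regex-map arguments are left elided here, as in the original command.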

    Read the article

  • Warning: E-Business Suite Issues with Sun JRE 1.6.0_18

    - by Steven Chan
    Users need a Java client to run the Forms-based content in Oracle E-Business Suite. With Oracle JInitiator 1.3 out of Premier Support as of July 2009, Apps users must run the native Sun Java Runtime Engine (JRE) to access this content. In early 2008 we relaxed our certification and support policy for the use of the native Sun JRE clients with the E-Business Suite. The policy reflected a switch from certifying specific JRE versions for the E-Business Suite to specifying minimum versions instead. This permits E-Business Suite users to run any JRE release above the following minimum certified levels, even later ones that Oracle hasn't explicitly tested with the E-Business Suite: JRE 1.5.0_13 and higher; JRE 1.6.0_03 and higher. Under our current policy, Oracle E-Business Suite end-users can upgrade their JRE clients whenever Sun releases a new JRE release on either the 1.5 or 1.6 versions. EBS users do not need to wait for Oracle to certify new JRE 1.5 or 1.6 plug-in updates with the E-Business Suite. Known E-Business Suite Issues with JRE 1.6.0_18: We test every new JRE release with both E-Business Suite 11i and 12, and we have identified a number of issues with JRE 1.6.0_18. If you haven't already upgraded your end-users to JRE 1.6.0_18, we recommend that you keep them on a prior JRE release such as 1.6.0_17 (6u17).

    Read the article

  • multicast tcpdump and subscriptions

    - by Karoly Horvath
    From the multicast howto: IP_ADD_MEMBERSHIP. Recall that you need to tell the kernel which multicast groups you are interested in. If no process is interested in a group, packets destined to it that arrive at the host are discarded. If you don't do that, you won't see those packets with tcpdump. Is it possible to subscribe to all multicast traffic so I can do a tcpdump of all existing traffic? I would think IGMP doesn't allow this, so probably not... but maybe you can configure a switch to still send all multicast traffic. Is that possible? Is it possible to do the subscription (for a specific IP) with a command line tool? (Note: I know how to do this in C, but would prefer to use an existing tool and not compile a separate program for this.)
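
    Two hedged pointers, not a definitive answer. First, tcpdump normally puts the interface into promiscuous mode, so the NIC itself will accept multicast frames without a join; what usually filters them out upstream is IGMP snooping on the switch, and many switches let you disable snooping (flooding all multicast to all ports) or mirror a port. Second, for a command-line join of a specific group, socat can issue the IP_ADD_MEMBERSHIP for you; the group, port and interface below are placeholders:

        # Join 239.1.1.1 so the kernel sends an IGMP membership report:
        socat -u UDP4-RECV:5000,ip-add-membership=239.1.1.1:eth0 -

        # Meanwhile, capture the group's traffic:
        tcpdump -i eth0 -n host 239.1.1.1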

    Read the article

  • OS X SMB Connection to Windows Server 2008 R2

    - by Tawm
    I support many Macs that connect to an SMB share on a Windows Server 2008 R2 box. Occasionally I find that the Mac will try to connect and fail. The connection process will skip asking for credentials, and there are none stored in Keychain's password section. The server logs will show that the user tried to authenticate with invalid credentials. There are also no lingering connections from the server's point of view. The workaround that I've found is to use an invalid username, or a username that isn't the user's, in the connection string for SMB - so smb://domain;user@server/share instead of smb://server/share. This will force it to use the username I specified, which the client doesn't have anything stored for, so it will then pop up the login window, where the user changes the username to the correct one and uses her password to connect happily. Specific computer in question: 15" MBP running Snow Leopard (10.6.7 or 10.6.8)

    Read the article

  • Where in the user profile are the Firefox search engine choices stored?

    - by N Rahl
    We have a large number of user profiles that were created on Ubuntu 10.04, and they had access to Google as a choice in the search bar, with Google as the provider for queries typed into the super bar. When logging into these same profiles from Mint 15 client machines, the Google search option does not exist for these users, as is the default for Mint. This setting seems to be user specific, but not a part of the Firefox profile? It seems that if it were a part of the FF profile, it would "just work" on Mint for these profiles, so I suspect the configuration may be stored somewhere else in the user's profile. Could someone please tell me where in a user's profile the search engine options are set? We would like to set this once, and then drop this configuration into everyone's profile so all of our users don't have to do this manually.

    Read the article

  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs that I must manage. I'm starting to investigate Puppet for managing the configuration of all of them, and apticron to let me know what's out of date. But the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open-source realm for now, seeing that we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we are running different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, plus MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
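
    Since rsyslog already ships with Ubuntu, one low-cost starting point - sketched here with a placeholder hostname, not prescribed - is plain rsyslog forwarding, which also covers application logs if the apps (or an imfile input) feed them into syslog:

        # Client /etc/rsyslog.d/forward.conf: ship everything to the aggregator over TCP
        *.* @@loghost:514

        # Aggregator /etc/rsyslog.conf: listen on TCP and split files per source host
        $ModLoad imtcp
        $InputTCPServerRun 514
        $template PerHost,"/var/log/remote/%HOSTNAME%/syslog.log"
        *.* ?PerHost

    (imfile is rsyslog's module for pulling plain text log files, such as Rails or Tomcat logs, into the syslog stream.)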

    Read the article

  • MySQL 5.1.49 freezing every two days

    - by maximus
    Hi all, our MySQL system is "freezing" every two days. By "freezing" I mean the following:

    - it doesn't respond to ping
    - we can't log in with SSH
    - we don't get any answer from MySQL
    - there is no entry in the error logs! Neither from Linux nor from MySQL.

    We have already changed to completely new hardware and we have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone help me by giving me some pointers on what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.

    Our system parameters and settings:

    - System memory: 12GB
    - Processor: Intel i7-920 quad-core
    - Operating system: Debian 5 (lenny) 64-bit
    - MySQL 5.1.49
    - Databases: (a) a small phpBB forum; (b) a 6GB database, 3 tables with about 15 million rows

    my.cnf:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user     = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket   = /var/run/mysqld/mysqld.sock
        port     = 3306
        basedir  = /usr
        datadir  = /var/lib/mysql
        tmpdir   = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = our-ip-address
        #
        # * Fine Tuning
        #
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 256K
        thread_cache_size  = 32
        max_connections    = 300
        table_cache        = 2048
        #thread_concurrency = 4

        # Used for InnoDB tables recommended to 50%-80% available memory
        innodb_buffer_pool_size = 6G
        # 20MB sometimes larger
        innodb_additional_mem_pool_size = 20M
        # 8M-16M is good for most situations
        innodb_log_buffer_size = 8M
        # Disable XA support because we do not use it
        innodb-support-xa = 0
        # 1 is default wich is 100% secure but 2 offers better performance
        innodb_flush_log_at_trx_commit = 1
        innodb_flush_method = O_DIRECT
        #innodb_thread_concurency = 8
        # Recommended 64M - 512M depending on server size
        innodb_log_file_size = 512M
        # One file per table
        innodb_file_per_table
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size  = 16M
        #query_cache_type = 1
        #query_cache_min_res_unit = 2K
        #join_buffer_size = 1M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log      = 1
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 2
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
        expire_logs_days = 10
        max_binlog_size  = 100M
        #binlog_do_db     = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # * InnoDB plugin
        # As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code.
        # It has many improvements and better performances than the built-in InnoDB storage engine.
        # Please read http://www.innodb.com/products/innodb_plugin/ for more information.
        # Uncommenting the two following lines to use the InnoDB plugin.
        ignore_builtin_innodb
        plugin-load=innodb=ha_innodb_plugin.so
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca   = /etc/mysql/cacert.pem
        # ssl-cert = /etc/mysql/server-cert.pem
        # ssl-key  = /etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #
        !includedir /etc/mysql/conf.d/

    UPDATE: After installing sysstat and configuring it to collect data every minute, I have the following data. I used sar to generate the output. The log file is too big so I couldn't include it here, but I uploaded it to box.net. The link is http://www.box.net/shared/xc6rh7qqob

    SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem was.

    Read the article

  • Drag2Up Brings Multi-Source Drag and Drop Uploading to Firefox

    - by ETC
    Last fall we shared Drag2Up with you, a handy little Chrome extension that makes it a snap to drag, drop, and upload files to a variety of file sharing sites. Now that same easy sharing is available for Firefox. Just like the Chrome version, the Firefox version adds super simple drag and drop file sharing to your web browsing experience. Drag images, text, and other file types onto any text box and Drag2Up uploads them to the file sharing service you've specified in the settings menu, such as Imgur, Imageshack, Pastebin, Hotfile, Droplr, and more. Hit up the link below to read more and grab a copy for your Firefox install. Drag2Up [Mozilla Add-ons]

    Read the article

  • How to prevent yourself from commenting on websites?

    - by MHH
    There is a bunch of browser add-ons to either block particular websites (e.g. LeechBlock, Chrome Nanny, or various OS-specific solutions) or block the comment section of a website (e.g. CommentBlocker). However, what if you want to be able to read the comment section of all websites, but never be able to add comments yourself on particular sites? Is there anything that will allow this? I'm particularly interested in answers that will work for both Windows and Mac, and will also work for Google Chrome, Firefox, and Safari (note they can be different solutions for each browser/operating system).

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to Nginx, and I have a point of confusion about nginx configuration. My web site contains folders in different locations:

        location / { root /Path1; }
        location ^~ /personal { alias /Path2; }

    When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1. Now I want to add a sub-directory in /personal with a specific configuration, so I add:

        location /personal/download { autoindex on; }

    But I get a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx such that all access to http://mysite/personal/* will be directed to the same directory under /Path2?
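
    A hedged sketch of why this happens and one way out: the standalone location /personal/download block matches the request on its own and knows nothing about the other block's alias, so the lookup falls back to the /Path1 root. Nesting the sub-location inside the /personal block and giving it its own alias (assuming /Path2/download is the directory you want listed) keeps the whole /personal/* subtree on /Path2:

        location / {
            root /Path1;
        }

        location ^~ /personal {
            alias /Path2;

            # "alias" is not inherited by nested locations, so map the subtree explicitly
            location /personal/download {
                alias /Path2/download;
                autoindex on;
            }
        }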

    Read the article

  • Modular programming is the method of programming small tasks or programs

    Modular programming is the method of programming small tasks or sub-programs that can be arranged in multiple variations to perform desired results. This methodology is great for preventing errors, because each task executes a specific process and can be debugged individually, or within a larger program when combined with other tasks or sub-programs. C# is a great example of how to implement modular programming because it allows functions, methods, classes and objects to be used to create smaller sub-programs. A program can be built from smaller pieces of code, which saves development time and reduces the chance of errors, because it is easier to test a small class or function for a simple solution than to test a full program which has layers and layers of small programs working together. Yes, it is possible to write the same program using modular and non-modular programming, but I would not recommend it. When you deal with non-modular programs, they tend to contain a lot of spaghetti code, which can be a pain to develop, not to mention debug, especially if you did not write the code. In addition, in my experience they seem to have a lot more hidden bugs which waste debugging and development time. The modular programming methodology, in comparison to non-modular, should be used whenever possible due to its use of small components. These small components allow business logic to be reused and are easier to maintain. From the user's point of view, they cannot really tell if the code is modular or not with today's computers.
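
    As a minimal illustration of the idea in C# (the names here are invented for the example, not taken from any particular program), each method below does one small, separately testable task, and the program is just those pieces arranged together:

        using System;
        using System.Linq;

        class OrderReport
        {
            // Each piece does one small task and can be tested on its own.
            static decimal[] LoadAmounts()
            {
                return new[] { 19.99m, 5.00m, 42.50m }; // stand-in for real input
            }

            static decimal Total(decimal[] amounts)
            {
                return amounts.Sum();
            }

            static string Format(decimal total)
            {
                return string.Format("Order total: {0:C}", total);
            }

            static void Main()
            {
                // Composition: small sub-programs arranged to produce the desired result.
                Console.WriteLine(Format(Total(LoadAmounts())));
            }
        }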

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM.

    Read the article

  • MySQL and User level logging

    - by Adraen
    I have been looking at logging only certain users' activity in MySQL. I found that logging can be enabled or disabled for all users, but one of the services using the db does a lot of queries, and therefore I would like to log only specific users. Google told me that a flag can be SET to enable or disable logging; however, I cannot modify the service's DB connection code, and asking every single user to enable logging before they do anything might not be as reliable as I want. So, do you know if there is any way to log only a set of users' queries? Thanks!
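
    A partial workaround, offered as a hedged sketch only (the logging schema and table here are invented): MySQL 5.1 has no per-user general log, but the init_connect variable runs for every connecting user without the SUPER privilege, so it can at least record who connected and when; since the busy service account can be granted SUPER, its connections stay out of the log:

        CREATE DATABASE IF NOT EXISTS logging;
        CREATE TABLE logging.connections (
            user VARCHAR(128),
            ts   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        SET GLOBAL init_connect = 'INSERT INTO logging.connections (user) VALUES (CURRENT_USER())';

    This records connections, not individual statements; statement-level filtering by user generally needs a proxy or an audit plugin.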

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With the BPM Suite 11g, Oracle introduced a major change in architecture with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include: managing in-process instance migration and running multiple process engines; migration of user interfaces and other code within the environment that may not be automated; growing or finding technical staff with both 10g and 12c experience; and managing migration projects while continuing to move the business forward and meet day-to-day responsibilities. As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate. A Migration Model: This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How can I get a scheduled task to run for a user regardless of which computer the user is logged onto?

    - by Ernst
    I've got a scheduled task that needs to run for a user at a specific time. However, the user sometimes logs onto one machine, the next day onto another, and the next week onto yet another. At some point during the day, the user might have to log onto another machine. How do I get the scheduled task to run regardless of which computer the user is using? I could of course create the task on all computers, but that seems a bit overkill. Running a script at logon (or a group policy) to create the task doesn't seem a good method either. Any ideas? Basically I want the scheduled task to be defined on the user instead of on the computer. If in the end I need to choose between the two options above, which is best?

    Read the article

  • Windows Azure Upgrade Domain

    - by kaleidoscope
    Windows Azure automatically divides your role instances into "logical" domains called upgrade domains. During an upgrade, Azure updates these domains one by one. This is by-design behavior to avoid nasty situations. One of the recent feature additions and enhancements on the platform was the ability to notify your role instances in case of "environment" changes, with adding or removing instances being the most common. In such a case, all your roles get a notification of this change. Imagine if you had 50 or 60 role instances, all getting notified at once and starting various actions to react to this change. It would be a complete disaster for your service. The way to address this problem is upgrade domains. During an upgrade, Windows Azure updates them one by one, and only the role instances associated with a specific domain get notified of the changes taking place. Only a small number of your role instances will get notified and react; the rest will remain intact, providing a seamless upgrade experience and no service disruption or downtime. http://www.kefalidis.me/archive/2009/11/27/windows-azure-ndash-what-is-an-upgrade-domain.aspx Lokesh, M
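
    A rough C# sketch of the notification hook being described (hedged: this uses the RoleEnvironment.Changing event from the Windows Azure SDK; the recycle-on-topology-change policy shown is just one example of a reaction):

        using System.Linq;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Raised before an environment change is applied to this instance.
                // Thanks to upgrade domains, only the instances in the domain
                // currently being updated receive it, one domain at a time.
                RoleEnvironment.Changing += (sender, e) =>
                {
                    // Cancel = true asks Azure to take this instance offline,
                    // apply the change, and restart it.
                    e.Cancel = e.Changes.Any(c => c is RoleEnvironmentTopologyChange);
                };
                return base.OnStart();
            }
        }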

    Read the article

  • Third-party open-source projects in .NET and Ruby and NIH syndrome

    - by Anton Gogolev
    The title might seem to be inflammatory, but it's here to catch your eye after all. I'm a professional .NET developer, but I try to follow other platforms as well. With Ruby being all hyped up (mostly due to Rails, I guess), I cannot help but compare the situation in open-source projects in Ruby and .NET. What I personally find interesting is that .NET developers are for the most part severely suffering from the NIH syndrome and are very hesitant to use someone else's code in pretty much any shape or form. Comparing it with Ruby, I see a striking difference. Folks out there have gems literally for every little piece of functionality imaginable. New projects are popping out left and right and generally are heartily welcomed. On the .NET side we have CodePlex, which I personally find to be a place where abandoned projects grow old and eventually get abandoned. Now, there certainly are several well-known and maintained projects, but the number of those pales in comparison with that of Ruby. Granted, NIH on the .NET devs' part comes mostly from the fact that there are very few quality .NET projects out there, let alone projects that solve their specific needs, but even if there is such a project, it's often frowned upon and is reinvented in-house. So my question is multi-fold: Do you find my observations anywhere near being correct? If so, what are your thoughts on the quality and quantity of OSS projects in .NET? Again, if you do agree with my thoughts on "NIH in .NET", what do you think is causing it? And finally, is it Ruby's feature set & community standpoint (dynamic language, strong focus on testing) that allows for such easy integration of third-party code?

    Read the article

  • Windows does not detect any modems anymore [HSPA]

    - by Shenal Silva
    By accident I uninstalled my connect manager (ZTE), but when I re-install it, the virtual CD drive is detected and the drivers are not. When I point it to the folder containing the relevant drivers (.sys files), it still doesn't detect them. I then tried a modem of a different brand, Huawei, and the same problem persists. As I understand it, this is not a device (brand) driver-specific issue, as it does not detect modems (dongles) of any brand. When I plugged in a different USB device (a pen drive), the driver was detected and it works perfectly. Please help me with this issue. I have tried System Restore but it doesn't work. I want a repair that works without re-installing Windows. [Edit] I tried Karan's suggested method with USBOblivion; it didn't work. Now I'm sure that the issue does not involve the registry.

    Read the article

  • JavaScript local alias pattern

    - by Bertrand Le Roy
    Here's a little pattern that is fairly common among JavaScript developers but that is not very well known to C# developers or people doing only occasional JavaScript development. In C#, you can use a "using" directive to create aliases of namespaces or bring them into the global scope:

        namespace Fluent.IO {
            using System;
            using System.Collections;
            using SystemIO = System.IO;

    In JavaScript, the only scoping construct is the function, but it can also be used as a local aliasing device, just like the above using directive:

        (function($, dv) {
            $("#foo").doSomething();
            var a = new dv("#bar");
        })(jQuery, Sys.UI.DataView);

    This piece of code is making the jQuery object accessible using the $ alias throughout the code that lives inside of the function, without polluting the global scope with another variable. The benefit is even bigger for the dv alias, which stands here for Sys.UI.DataView: think of the reduction in file size if you use that one a lot, or about how much less you'll have to type... I've taken the habit of putting almost all of my code, even page-specific code, inside one of those closures, not just because it keeps the global scope clean but mostly because of that handy aliasing capability.

    Read the article
