Search Results

Search found 21097 results on 844 pages for 'check snmp'.


  • No sound after clean install 11.10

    - by Jorge
    First of all, sorry to ask this; I'm sure it has been asked many times before. Second, sorry for my English, it's not my native language. And third, thank you in advance. I hope the following info helps. Here's a log: http://www.alsa-project.org/db/?f=07089caf530494bc4bc23e1d1cd56b3a5fae03c6 I already checked 'System - Preferences - Sound'; here's a screenshot: http://i.imgur.com/Ghwnj.png

        jorge@jorge-desktop:~$ sudo lshw -class multimedia
        *-multimedia
             description: Multimedia audio controller
             product: VT8233/A/8235/8237 AC97 Audio Controller
             vendor: VIA Technologies, Inc.
             physical id: 11.5
             bus info: pci@0000:00:11.5
             version: 60
             width: 32 bits
             clock: 33MHz
             capabilities: pm cap_list
             configuration: driver=VIA 82xx Audio latency=0
             resources: irq:22 ioport:e400(size=256)

    Tried with no results:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get clean && sudo apt-get autoremove
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo apt-get install ubuntu-desktop

    Also, with sudo gedit /etc/default/grub, I changed

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    to

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"

    then ran sudo update-grub and rebooted... without any result.

    EDIT: I made sure that everything is fine with aplay -l, lspci -v and lsmod, and I checked alsamixer; it's not muted. Well, I'm running out of ideas. Thanks.

    Read the article

  • Error message during update from 13.04 to 13.10

    - by layonhands
    The following was reported after I attempted to report the problem back to Ubuntu:

        The problem cannot be reported: You have some obsolete package versions installed. Please upgrade the following packages and check if the problem still occurs: ubuntu-release-upgrader-gtk, apport, apport-gtk, apport-symptoms, apt, apt-utils, at-spi2-core, binutils, dbus, gcc-4.7-base, gdb, gir1.2-atk-1.0, gir1.2-gtk-3.0, glib-networking, glib-networking-common, glib-networking-services, gnupg, gpgv, ifupdown, initramfs-tools, initramfs-tools-bin, kmod, libappindicator3-1, libapt-inst1.5, libapt-pkg4.12, libasound2, libatk-bridge2.0-0, libatk1.0-0, libatk1.0-data, libatspi2.0-0, libc-bin, libc6, libcups2, libdbus-1-3, libdbusmenu-glib4, libdbusmenu-gtk3-4, libdrm-intel1, libdrm-nouveau2, libdrm-radeon1, libdrm2, libgail-3-0, libgcc1, libgcrypt11, libglib2.0-0, libglib2.0-data, libgnutls26, libgomp1, libgstreamer-plugins-base1.0-0, libgstreamer1.0-0, libgtk-3-0, libgtk-3-bin, libgtk-3-common, libgudev-1.0-0, libicu48, libindicator3-7, libkmod2, liblcms2-2, libpci3, libplymouth2, libpolkit-agent-1-0, libpolkit-backend-1-0, libpolkit-gobject-1-0, libprocps0, libpython-stdlib, libpython2.7, libpython2.7-minimal, libpython2.7-stdlib, libpython3-stdlib, libpython3.3-minimal, libpython3.3-stdlib, libssl1.0.0, libstdc++6, libtiff5, libudev1, libx11-6, libx11-data, libx11-xcb1, libxcb-dri2-0, libxcb-glx0, libxcb-render0, libxcb-shm0, libxcb1, libxcursor1, libxext6, libxfixes3, libxi6, libxinerama1, libxml2, libxrandr2, libxrender1, libxres1, libxt6, libxtst6, libxxf86vm1, lsb-base, lsb-release, module-init-tools, multiarch-support, openssl, passwd, pciutils, perl, perl-base, perl-modules, plymouth, plymouth-theme-ubuntu-text, policykit-1, procps, python, python-gi, python-minimal, python2.7, python2.7-minimal, python3, python3-apport, python3-distupgrade, python3-gi, python3-minimal, python3-problem-report, python3-software-properties, python3-update-manager, python3.3, python3.3-minimal, rsyslog, shared-mime-info, software-properties-common, software-properties-gtk, tar, tzdata, ubuntu-release-upgrader-core, ubuntu-release-upgrader-gtk, udev, update-manager, update-manager-core, update-notifier, update-notifier-common

    If this question has already been answered, I'm sorry for the repost, but I would appreciate a link to the fix. Thanks. FYI: Dell Latitude D630, Intel Centrino processor. Also, the updater is currently running what seems to be the update. I will report back when it is done going through its process, to let you know if it is in fact the 13.10 update.

    Update 2: The system went through an update, but it wasn't for the OS. I think it was an update for the error message mentioned above. Now the OS update is currently running the 'distribution upgrade' portion of the update. This is further than it had gone before. Again, I will report back once this is done to let you know whether or not the update was successful.

    Final Update: I don't know for sure what happened, but I'm almost sure that the error mentioned above was resolved in the first update prior to the 13.10 update. All set.

    Read the article

  • How to debug lag using Bluetooth connected mouse and A2DP headset?

    - by gertvdijk
    I own a Logitech M555b mouse (for a week now) for use with my HP EliteBook 8570w laptop running Kubuntu 12.04. It works fine right after connecting using the KDE Bluetooth control module. However, after some time (seemingly random), it starts to lag: movements are delayed by roughly 500 ms for a short period of time. Usually it recovers after some time too, but it can take minutes. All actions are delayed: movements, clicks, scrolls. Additionally, the movements can be choppy during these times. A workaround that always works, for the same short period of time, is to disconnect and re-connect the mouse; this can be done using the same KDE Bluetooth control module.

    What did I try already?

    - Running this at boot time, to disable any power-saving features on the Bluetooth hci0 device:

          echo on > `readlink -f /sys/class/bluetooth/hci0`/../../../power/level

    - Checking the mouse's batteries (it's just a week old; other new batteries: same result).
    - Checking logs and kernel messages for Bluetooth-related entries: none besides the expected messages at connect time.

    I'm running kernel 3.5.0-13-generic as provided in the xorg-edgers PPA; booting the regular 3.2 Precise kernel results in the same behaviour. Some other information that may help:

    - It happens when no other Bluetooth connections are active on the machine.
    - Similar symptoms also occur on my Bluetooth stereo (A2DP) headset, but there it's audio lagging and skipping. Swapping Bluetooth profiles as described here then helps. Conclusion: it's not the mouse that's faulty.
    - The headset always worked fine with my now-dead ThinkPad T61p with built-in Bluetooth.
    - The Bluetooth module in my laptop is connected via USB and shows up as Bus 002 Device 003: ID 0a5c:21e1 Broadcom Corp.
    - I'm mobile, and several people around me are using Bluetooth at work (A2DP mostly). It also occurs at home, where my neighbours are probably using Bluetooth as well. It could just be radio interference, but I think Bluetooth connections should just hop to another channel; moreover, it works properly the instant I re-connect.

    Therefore I think it's a software driver issue and I'd like to debug it. Is there any way to get more verbose logging on the Bluetooth(-hid) modules?
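    A hedged starting point for the logging question (my suggestion, not from the post): capture the raw HCI traffic while the lag happens and turn on verbose kernel output for the Bluetooth modules. This assumes the stock BlueZ tools and a kernel built with CONFIG_DYNAMIC_DEBUG:

        # capture raw HCI traffic with timestamps while reproducing the lag
        sudo hcidump -i hci0 -t -X > bt-trace.log

        # enable verbose kernel messages for the bluetooth core and the HID profile
        # (assumes debugfs is mounted at /sys/kernel/debug)
        echo 'module bluetooth +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
        echo 'module hidp +p'      | sudo tee /sys/kernel/debug/dynamic_debug/control

        # then watch for entries around the moment the mouse starts lagging
        tail -f /var/log/kern.log

    Comparing the HCI trace around a lag episode with one taken right after a fresh re-connect should show whether the delay originates at the radio level or higher up in the stack.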

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from a DBA on the subject of resetting identity values for all tables. After talking to the DBA, I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11; the seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    When the seed value is 11, the first row inserted gets identity value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the DELETE statement:

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause, it deletes all the rows. However, when a new record is inserted, the identity value increases from 11 to 12. It does not reset but keeps on increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement:

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed, it removes all the rows, and when a new record is inserted the identity value starts from 11 again: TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the RESEED statement: notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more row and check the value, the new value is 2. The logic is reseed value + increment value; in this case that is 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean-up act.

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: if I reseed with some random number and then run the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then truncate the table, what is the seed value now?) The complete script is below; you can modify it and find the answer to the above question. Please leave a comment with your answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
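    Here is the complete script together, assembled from the snippets above; the only addition is the final reseed-then-truncate experiment posed in the question (treat it as a sketch to run in a scratch database):

        USE [TempDB]
        GO
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]   -- ID = 11: first value at the seed
        GO
        DELETE FROM [TestTable]
        GO
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]   -- ID = 12: DELETE does not reset identity
        GO
        TRUNCATE TABLE [TestTable]
        GO
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]   -- ID = 11: TRUNCATE resets to the original seed
        GO
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]   -- ID = 2: reseed value + increment
        GO
        -- The question: reseed, then truncate. What seed applies now?
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        TRUNCATE TABLE [TestTable]
        GO
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]   -- run it and see
        GO
        DROP TABLE [TestTable]
        GO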

    Read the article

  • Create Outlook Appointments from PowerShell

    - by BuckWoody
    I've been toying around with a script to create a special set of calendar objects in Outlook that show when my SQL Server Agent Jobs are scheduled to run. I haven't finished yet, but I thought I would share the part that creates the Outlook appointments. I have yet to fill a variable with the start and end times, and then loop through that to create the appointments. I'm thinking I'll make the script below into a function, and feed it those variables in a loop. The script below works against a new Calendar folder in Outlook called "SQL Server Agent Jobs". I also use categories quite a bit, so you'll see that too.

    Caution: if you plan to play with this script, do it on an isolated workstation, not on your "regular" Outlook calendar. Otherwise, you'll have lots of appointments in there that you don't care about!

        # Add a new calendar item to a new Outlook folder called "SQL Server Agent Jobs"
        $outlook = new-object -com Outlook.Application
        $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
        $appt = $calendar.Items.Add(1) # == olAppointmentItem
        $appt.Start = [datetime]"03/11/2010 11:00"
        $appt.End = [datetime]"03/11/2010 12:00"
        $appt.Subject = "JobName"
        $appt.Location = "ServerName"
        $appt.Body = "Job Details"
        $appt.Categories = "SQL server Agent Job"
        $appt.Save()

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
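    As a sketch of the function version the post hints at (hypothetical name and parameters, not BuckWoody's finished script):

        function New-AgentJobAppointment {
            param(
                [datetime] $Start,
                [datetime] $End,
                [string]   $JobName,
                [string]   $ServerName,
                [string]   $Details = "Job Details"
            )
            $outlook  = new-object -com Outlook.Application
            $calendar = $outlook.Session.folders.Item(1).Folders.Item("SQL Server Agent Jobs")
            $appt = $calendar.Items.Add(1)   # olAppointmentItem
            $appt.Start      = $Start
            $appt.End        = $End
            $appt.Subject    = $JobName
            $appt.Location   = $ServerName
            $appt.Body       = $Details
            $appt.Categories = "SQL server Agent Job"
            $appt.Save()
        }

        # fed from a (hypothetical) collection of job schedules:
        # $schedules | ForEach-Object {
        #     New-AgentJobAppointment -Start $_.Start -End $_.End -JobName $_.Name -ServerName $_.Server
        # }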

    Read the article

  • Introducing Oracle Retail Mobile Point-of-Service

    - by user801960
    Oracle recently announced the introduction of Oracle Retail Mobile Point-of-Service, a mobile extension to the Oracle Retail Point-of-Service (POS) used by many retailers internationally. Oracle Retail Mobile POS offers wide-ranging cost and efficiency benefits by allowing staff resource to be used more effectively whilst also reducing spend associated with fixed POS solutions.

    For retailers utilising Oracle Retail Stores Solutions, additional benefits can be realised. Oracle Retail Mobile POS works with these solutions to allow store personnel to check in-store inventory, access product information and specifications, and perform tasks such as the printing or emailing of receipts and the activation of gift cards.

    As Oracle Retail Mobile POS is an extension of Oracle Retail Point-of-Service, retailers can benefit from seamless integration with existing systems, simple upgrade procedures and seamless delivery across the business. However, the solution's scalable and flexible architecture also supports multiple mobile operators and systems, so retailers are not locked into particular vendors.

    As well as being popular with retailers, Mobile POS has also proved to be well liked by consumers, as it facilitates improved customer service levels. Retail staff are able to spend more time with consumers on the shop floor, access requested inventory information, and perform tasks that would traditionally have needed to be completed at a fixed cash register.

    Additional information can be accessed on Oracle Retail Point-of-Service, or read the press announcement Oracle Introduces Mobile Point-of-Service for Retailers.

    Read the article

  • Explanation of the definition of interface inheritance as described in GoF book

    - by Geek
    I am reading the first chapter of the GoF book. Section 1.6 discusses class versus interface inheritance:

        Class versus Interface Inheritance

        It's important to understand the difference between an object's class and its type. An object's class defines how the object is implemented. The class defines the object's internal state and the implementation of its operations. In contrast, an object's type only refers to its interface--the set of requests to which it can respond. An object can have many types, and objects of different classes can have the same type. Of course, there's a close relationship between class and type. Because a class defines the operations an object can perform, it also defines the object's type. When we say that an object is an instance of a class, we imply that the object supports the interface defined by the class.

        Languages like C++ and Eiffel use classes to specify both an object's type and its implementation. Smalltalk programs do not declare the types of variables; consequently, the compiler does not check that the types of objects assigned to a variable are subtypes of the variable's type. Sending a message requires checking that the class of the receiver implements the message, but it doesn't require checking that the receiver is an instance of a particular class.

        It's also important to understand the difference between class inheritance and interface inheritance (or subtyping). Class inheritance defines an object's implementation in terms of another object's implementation. In short, it's a mechanism for code and representation sharing. In contrast, interface inheritance (or subtyping) describes when an object can be used in place of another.

    I am familiar with the Java and JavaScript programming languages and not really familiar with C++, Smalltalk or Eiffel as mentioned here, so I am trying to map the concepts discussed here to Java's way of doing classes, inheritance and interfaces. This is how I think of these concepts in Java: a class is always a blueprint for the objects it produces, and what interface (as in "set of all possible requests that the object can respond to") an object of that class possesses is fixed at compile time, because the class of the object will have implemented those interfaces. The requests that an object of that class can respond to are the set of all the methods in the class (including those implemented for the interfaces that the class implements).

    My specific questions are:

    1. Am I right in saying that Java's way is more similar to C++, as described in the third paragraph?
    2. I do not understand what is meant by interface inheritance in the last paragraph. In Java, interface inheritance is one interface extending another interface, but I think the word "interface" has some other, overloaded meaning here. Can someone provide an example in Java of what is meant by interface inheritance here, so that I understand it better?
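    A minimal Java sketch of the distinction (my own illustration, not from the book):

        // Interface inheritance (subtyping): Stack promises behaviour only.
        interface Stack<T> {
            void push(T item);
            T pop();
        }

        // Class inheritance: ArrayStack reuses ArrayList's implementation.
        class ArrayStack<T> extends java.util.ArrayList<T> implements Stack<T> {
            public void push(T item) { add(item); }
            public T pop() { return remove(size() - 1); }
        }

        // A different class with the same type: usable wherever a Stack is expected.
        class LinkedStack<T> implements Stack<T> {
            private final java.util.Deque<T> items = new java.util.ArrayDeque<T>();
            public void push(T item) { items.push(item); }
            public T pop() { return items.pop(); }
        }

    In GoF terms, Stack is a type. ArrayStack uses class inheritance: it inherits ArrayList's implementation, not just its interface. LinkedStack shares only the Stack type with ArrayStack; that "can be used in place of" relationship is interface inheritance (subtyping), and in Java it is established by implements just as much as by one interface extending another.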

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: this is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only.

    Problem: given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If any of the end points are outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon.

    One point of particular interest is the difference between cases 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky and I had to treat it as a special case which checks whether the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
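    For reference, the primitive such tests usually hinge on, sketched in C# (the epsilon value is an arbitrary assumption, and Vector2 stands in for whatever 2D point type is in use): an orientation test that reports "collinear" within a tolerance rather than trusting the raw sign of a cross product.

        // Twice the signed area of triangle (a, b, c):
        // > 0 means c is left of a->b, < 0 means right, ~0 means collinear.
        static double Cross(Vector2 a, Vector2 b, Vector2 c)
        {
            return (b.X - a.X) * (double)(c.Y - a.Y) - (b.Y - a.Y) * (double)(c.X - a.X);
        }

        // Returns -1, 0 or +1, where 0 means "collinear as far as we can tell".
        static int Orientation(Vector2 a, Vector2 b, Vector2 c, double epsilon = 1e-6)
        {
            double cross = Cross(a, b, c);
            if (Math.Abs(cross) <= epsilon) return 0;
            return cross > 0 ? 1 : -1;
        }

    Segment AB properly crosses edge PQ when Orientation(A,B,P) != Orientation(A,B,Q) and Orientation(P,Q,A) != Orientation(P,Q,B) with no zeros involved; every case that produces a zero (an endpoint on an edge, shared vertices, collinear overlaps) then becomes an explicit branch to classify rather than a floating-point accident, which is where cases like 1 versus 9 have to be decided.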

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but, for whatever reason, not charged to the user. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. One issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and compare them one by one using cursors. In this case, the procedure would run on the server where one of the sources is, so the contents of the other would be looked up over a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key is the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look up numberX's entry in the hash_map (which will be a list of times) and check whether timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). A sketch of this option is below.

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
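    A sketch of option 2's core in C++ (the file names, the epoch-seconds time format and the ±2-second tolerance are my assumptions; std::unordered_map stands in for the older hash_map):

        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // True if t is within tol seconds of any entry in times.
        static bool matches(const std::vector<long>& times, long t, long tol = 2) {
            for (long u : times)
                if (std::labs(u - t) <= tol) return true;
            return false;
        }

        int main() {
            // Key: phone number; value: times (epoch seconds) of billed SMS.
            std::unordered_map<std::string, std::vector<long>> billed;

            std::ifstream billing("billing_sample.txt");   // "number time" per line
            std::string number;
            long t;
            while (billing >> number >> t)
                billed[number].push_back(t);

            // Stream the traffic sample and report SMS with no billing match.
            std::ifstream traffic("traffic_sample.txt");
            while (traffic >> number >> t) {
                auto it = billed.find(number);
                if (it == billed.end() || !matches(it->second, t))
                    std::cout << number << ' ' << t << '\n';   // candidate for "uncharged"
            }
            return 0;
        }

    With roughly 2 million records per side this stays comfortably in memory, and each lookup is O(1) on the number plus a short scan of that phone's times.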

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of the Database Plug-in (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:

    - Upgrade of the complete cluster across the nodes (example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB).
    - Best practices in tune with your operations, where you can automate the upgrade in steps. Step 1: upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test and then move to the DBs). Step 2: upgrade the RAC DBs either separately or in a group (mass upgrade of RAC DBs in the cluster).
    - Standard prerequisite checks like the Cluster Verification Utility (CVU) and RAC checks.
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
    - Ability to configure backup and restore options as part of this upgrade process. You can choose to: (a) take a backup via this process (either Guaranteed Restore Point (GRP) or RMAN); (b) set the procedure to pause just before the upgrade step to allow you to take a custom backup; or (c) ignore backup completely, if there are external mechanisms already in place.

    High-level steps:

    1. Select the procedure "Upgrade Database" from the Database Provisioning home page.
    2. Choose the target type for upgrade and the destination version.
    3. Pick and choose the cluster; it picks up the complete topology, since the Clusterware/GI isn't upgraded already.
    4. Select the gold image of the destination version for deploying both the GI and RAC OHs.
    5. Specify the new OH patch and credentials; choose the restore and backup options; if required, provide additional pre and post scripts.
    6. Set the breakpoints in the procedure execution to isolate downtime activities.
    7. Submit and track the procedure's execution status.

    The animation below captures the steps in the wizard. For the step-by-step process and to understand the support matrix, check this documentation link. Explore the functionality! In the next blog, I will talk about automating rolling upgrades of databases in a Physical Standby Data Guard environment using Transient Logical Standby.

    Read the article

  • Managing Matrix Relationships: Organization Visualization and Navigation

    - by Nancy Estell Zoder
    Oracle is pleased to announce the posting of our latest feature, Matrix Relationship Administration. Our continued investment in our Organization Visualization and Navigation solution is demonstrated with the release of Matrix Relationship Administration, as well as the enhancements made to our Org Viewer capabilities. Some of those enhancements include the ability to export to Excel and Visio, Search, Zoom, and the addition of Manager Self Service transactions.

    Matrix relationships are relationships defined by rules or ad hoc. These relationships can include, but are not limited to, product or project affiliations and functional groups, including multi-dimensional relationships such as when the product, region or even the customer is the profit center. The PeopleSoft solution will enable you to configure how you work in this multi-dimensional world to ensure you have the tools to be productive.

    For more information, please check out the datasheet available on oracle.com, the video on the feature on YouTube, or contact your sales representative.

    Read the article

  • How to fix 'grub error file not found' when installing 12.04?

    - by Tomasz Grabowski
    I'm trying to install Ubuntu. I don't know if it is important, but I'm trying to install it on an external HDD. In the end I have an external bootable HDD which only displays:

        error: file not found
        grub rescue>

    From the beginning:

    1. I downloaded ubuntu-12.04-desktop-i386.iso.
    2. I used LiLi USB Creator (LinuxLive) to create a bootable pendrive from that image.
    3. I booted from it; it works.
    4. I clicked "Try Ubuntu"; that works too.
    5. I used GParted to look over the drives (disks). My primary embedded disk is seen as /dev/sda, my attached external disk as /dev/sdb, and my pendrive as /dev/sdc.
    6. I created partitions on /dev/sdb: a first partition for the system (over 200 GiB); a second partition that was already there (it's XFS, and I don't want to touch it :P); and a third, extended partition with one logical partition (10 GiB) for swap.
    7. I started the installation. I chose "something else" in, I believe, the second screen, selected /dev/sdb as the boot disk, set the first partition of /dev/sdb to the ext3 file system, checked the "format" checkbox, set the mount path to "/", and set the first logical partition as the swap partition.

    After the installation finished, I restarted my computer. When I boot from my primary disk it works OK; my previous operating system, Vista, works OK. When I set my BIOS to boot from my external disk, I only get that message:

        error: file not found
        grub rescue>

    I tried to reinstall, but it didn't help. In desperation, I tried to read a bit about that grub rescue command line and experimented a bit. I'm not sure if this had any point, or if it gives you some information (notice that I don't know what I'm doing :P). When I typed the command

        insmod (hd1,1)/boot/grub/linux.mod

    I got the message "unknown filesystem"; the same with

        insmod (hd1,msdos1)/boot/grub/linux.mod

    and the same with "insmod ext3". But I got no message after the command "insmod ext2". Notice that I really don't know what this command exactly does, but then I thought that maybe if I reinstalled Ubuntu with the ext2 filesystem it would work. I did that, but the symptoms are the same. I went back to the live version of Ubuntu; the filesystem and basic directories seem to be present on /dev/sdb1. I'm completely unfamiliar with GRUB, and I also don't know which version of GRUB it is; I hope there is only one version on ubuntu-12.04-desktop-i386.iso. Any help? Thanks.
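    For reference, the usual probing sequence at that prompt looks like this (a hedged sketch; the partition name is whatever ls reports as holding your Ubuntu root, assumed here to be (hd1,msdos1)):

        grub rescue> ls                            # list disks/partitions, e.g. (hd0) (hd1,msdos1) ...
        grub rescue> ls (hd1,msdos1)/boot/grub     # find the partition containing GRUB's files
        grub rescue> set prefix=(hd1,msdos1)/boot/grub
        grub rescue> set root=(hd1,msdos1)
        grub rescue> insmod normal
        grub rescue> normal                        # boots the normal menu if prefix/root are right

    Once booted into Ubuntu this way, reinstalling the bootloader onto the external disk (sudo grub-install /dev/sdb followed by sudo update-grub) would make the fix permanent. "error: file not found" at boot typically means GRUB's prefix points at the wrong disk, which fits an external-HDD install.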

    Read the article

  • Accessing SSRS Report Manager on Windows 7 and Windows 2008 Server

    - by Testas
    Here is a problem I was emailed last night.

    Problem: SSRS 2008 on Windows 7 or Windows 2008 Server is configured with a user account that is a member of the administrators group, but that account cannot access Report Manager without running IE as administrator and adding the SSRS server to trusted sites. (The built-in Administrators account is by default made a member of the System Administrator and Content Manager SSRS roles.) As a result, the OS limits the use of elevated permissions by removing the administrator permissions when accessing applications such as SSRS.

    Resolution - two options:

    1. Continue to run IE as administrator; you must still add the report server site to trusted sites.
    2. Add the site to trusted sites and manually add the user to the System Administrator and Content Manager roles:

       1. Open a browser window with Run as administrator permissions: from the Start menu, click All Programs, right-click Internet Explorer, and select Run as administrator. Click Allow to continue.
       2. In the URL address, enter the Report Manager URL.
       3. Click Tools > Internet Options > Security > Trusted Sites > Sites. Add http://<your-server-name>. Clear the check box "Require server verification (https:) for all sites in this zone" if you are not using HTTPS for the default site. Click Add, then OK.
       4. In Report Manager, on the Home page, click Folder Settings. In the Folder Settings page, click Security, then New Role Assignment. Type your Windows user account in this format: <domain>\<user>. Select Content Manager. Click OK.
       5. Click Site Settings in the upper corner of the Home page. Click Security, then New Role Assignment. Type your Windows user account in this format: <domain>\<user>. Select System Administrator. Click OK.
       6. Close Report Manager, then re-open it in Internet Explorer without using Run as administrator.

    Problems will also exist when deploying SSRS reports from Business Intelligence Development Studio (BIDS) on Windows 7 or Windows 2008; therefore you should also run Business Intelligence Development Studio as administrator.

    Information on this issue can be found at http://msdn.microsoft.com/en-us/library/bb630430.aspx

    Read the article

  • RPi and Java Embedded GPIO: Sensor Connections for Java Enabled Interface

    - by hinkmond
    Now we're ready to connect the hardware needed to make a static electricity sensor for the Raspberry Pi and use Java code to access it through a GPIO port.

    First, very carefully bend the NTE312 (or MPF-102) transistor "gate" pin (see the diagram on the back of the package or refer to the pin diagram on the Web). You can see it in the inset photo on the bottom left corner. I bent the leftmost pin of the NTE312 transistor as I held the flat part toward me. That is going to be your antenna. So, connect one of the jumper wires to the bent pin. I used the dark green jumper wire (looks almost black; coiled at the bottom) in the photo. Then push the other 2 pins of the transistor into your breadboard. Connect one of the pins to Pin # 1 (3.3V) on the GPIO header of your RPi. See the diagram if you need to glance back at it. In the photo, that's the orange jumper wire. And connect the final unconnected transistor pin to Pin # 22 (GPIO25) on the RPi header. That's the blue jumper wire in my photo.

    For reference, connect the LED anode (long pin on a common anode LED/short pin on a common cathode LED; check your LED pin diagram) to the same breadboard hole that is connecting to Pin # 22 (same row of holes where the blue wire is connected), and connect the other pin of the LED to GROUND (the row of holes that connects to the black wire in the photo).

    Test by blowing up a balloon, rubbing it on your hair (or your co-worker's hair, if you are hair-challenged) to statically charge it, and bringing it near your antenna (the green wire in the photo). The LED should light up when it's near and go off when you pull it away. If you need more static charge, find a co-worker with really long hair, or rub the balloon on a piece of silk (which is just as good but not as fun).

    Next blog post is where we do some Java coding to access this sensor on your RPi. Finally, back to software! Ha!

    Hinkmond
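    As a preview of that Java post, here is a minimal sketch of one way to poll GPIO25 from Java via the standard Linux sysfs interface (my assumption of the approach, not Hinkmond's actual code; run as root, after exporting the pin):

        // Shell setup first (per the standard sysfs GPIO interface):
        //   echo 25 > /sys/class/gpio/export
        //   echo in > /sys/class/gpio/gpio25/direction
        import java.io.BufferedReader;
        import java.io.FileReader;

        public class StaticSensor {
            public static void main(String[] args) throws Exception {
                while (true) {
                    BufferedReader r = new BufferedReader(
                            new FileReader("/sys/class/gpio/gpio25/value"));
                    String value = r.readLine();   // "1" when the transistor conducts
                    r.close();
                    System.out.println("GPIO25 = " + value);
                    Thread.sleep(500);             // poll twice a second
                }
            }
        }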

    Read the article

  • Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition]

    - by Asian Angel
    As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus-size edition of Happy New Year wallpapers for your desktops.

    Extra note: we made a special effort to find wallpapers for this collection without the year "printed" on them, thus allowing for reuse as desired and/or needed beyond the 2010-2011 holiday.

    Note: click on the picture to see the full-size image. These wallpapers vary in size, so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen's resolution. For more New Year's desktop goodness, be sure to check out our Happy New Year icon & font packs collection (link at bottom)!

    Notes on individual wallpapers in the collection:

    - One wallpaper will need to be placed on a larger white background in order to increase the height.
    - One wallpaper will need to be placed on a larger background in order to increase the width and height.
    - Two wallpapers come in multiple sizes and will need to be downloaded as zip files.
    - Two wallpapers have original versions with a download size of 15 MB.

    More Happy New Year fun: Desktop Fun: Happy New Year Icon and Font Packs. For more wallpapers, be certain to see our great collections in the Desktop Fun section.

    Read the article

  • SQL Azure and Trust Services

    - by BuckWoody
    Microsoft is working on a new Windows Azure service called "Trust Services". Trust Services takes a certificate you upload and uses it to encrypt and decrypt sensitive data in the cloud. Of course, like any security service, there's a bit more to it than that. I'll give you a quick overview of how you can use this product to protect data you send to SQL Azure.

    The primary issue with storing data in the cloud is that you are in an environment that isn't under your control; in fact, that's the benefit of being in a distributed computing environment in the first place. On premises you're able to encrypt data you don't want anyone else to see, using various methods such as passwords (not very strong) or certificates (stronger). When you use a certificate, it's vital that you create (or procure) and protect it yourself. When you store data remotely, regardless of IaaS, PaaS or SaaS, you don't own the machines where the data lives. That means if you use a certificate from the cloud vendor to encrypt the data, you have to trust that the data won't be accessed by the vendor. In some cases having a signed agreement with the vendor that they won't access your data is sufficient; in other cases that doesn't meet the requirements your system has for security.

    With the new Trust Services service, the basic process is that you use a portal to create a Trust Server using policies and other controls. You place an X.509 certificate you create or procure on that server. Using the Software Development Kit (SDK), the developer has access to an Application Layer Encryption Framework to set the fields of data they want to encrypt. From there, the data can be stored in SQL Azure as a standard field; only it is encrypted before it ever arrives. The portion of the client software that decrypts the data uses the same service, so the authenticated user sees the data if they are allowed to do so. The data remains encrypted "at rest".

    You can learn more about this product and check it out in the SQL Azure labs at Microsoft Codename "Trust Services".
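    To make the "encrypted before it ever arrives" idea concrete, here is a generic .NET sketch of client-side field encryption with your own certificate; this illustrates the pattern, not the Trust Services API itself:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        static class FieldCrypto
        {
            // Encrypt with the certificate's public key; only the private-key
            // holder can decrypt. RSA limits payload size, so real designs
            // usually wrap a symmetric key rather than encrypting fields directly.
            public static string Encrypt(string plaintext, X509Certificate2 cert)
            {
                var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;
                byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true);
                return Convert.ToBase64String(cipher);   // store this string in SQL Azure
            }
        }

    The column in SQL Azure then holds only Base64 ciphertext, so the data is opaque to anyone who can read the database but lacks the private key.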

    Read the article

  • What can a Service do on Windows?

    - by Akemi Iwaya
    If you open up Task Manager or Process Explorer on your system, you will see many services running. But how much of an impact can a service have on your system, especially if it is 'corrupted' by malware? Today's SuperUser Q&A post has the answers to a curious reader's questions.

    Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question: SuperUser reader Forivin wants to know how much impact a service can have on a Windows system, especially if it is 'corrupted' by malware:

        What kind of malware/spyware could someone put into a service that does not have its own process on Windows? I mean services that use svchost.exe, for example. Could a service spy on my keyboard input? Take screenshots? Send and/or receive data over the internet? Infect other processes or files? Delete files? Kill processes? How much impact could a service have on a Windows installation? Are there any limits to what a malware-'corrupted' service could do?

    The Answer: SuperUser contributor Keltari has the answer for us:

        What is a service? A service is an application, no more, no less. The advantage is that a service can run without a user session. This allows things like databases, backups, the ability to log in, etc. to run when needed and without a user logged in.

        What is svchost? According to Microsoft: "svchost.exe is a generic host process name for services that run from dynamic-link libraries". Could we have that in English please? Some time ago, Microsoft started moving all of the functionality from internal Windows services into .dll files instead of .exe files. From a programming perspective, this makes more sense for reusability... but the problem is that you cannot launch a .dll file directly from Windows; it has to be loaded up from a running executable (.exe). Thus the svchost.exe process was born.

        So, essentially a service which uses svchost is just calling a .dll and can do pretty much anything with the right credentials and/or permissions. If I remember correctly, there are viruses and other malware that do hide behind the svchost process, or name their executable svchost.exe to avoid detection.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    - You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one: "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."

    - 'Shadow IT' can be the cloud's best friend | David Linthicum: "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says InfoWorld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?

    - 9 ways cloud will impact IT employment | ZDNet: ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: new categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.

    - Decisions, Decisions: The art, science, and politics of technology selection: "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."

    - Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center: Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.

    - Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci: Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances.

    Thought for the Day: "No one hates software more than software developers." - Jeff Atwood (Source: SoftwareQuotes)

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles. I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all. So I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying; I would like to decide whether I want a browser opened, but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. (I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings.)

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.

    Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase, I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't.

    Next I'm going to profile a load test.

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature: finally it's possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are noticed more readily than email messages, which can be lost in the mass of spam. The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to make: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible at the provided URL (the user name and password are not verified). This check is needed because the OMS web-service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alert settings; the email option is selected by default. The alert delivery method is displayed in the list of existing alerts.

    Office Mobile Service (OMS): SharePoint 2010 uses exterior servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of its own protocol instead of using an existing one. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to a server which processes these messages and delivers them to mobile phones. The typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer by SMTP protocol, i.e. by email. A key quality of this protocol is that it's built on top of the HTTP(S) and SOAP protocols. This means that in fact the SMS gateway must expose a typed web service. What do you get from a web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was available when this article was published.

    For promotion of the protocol and to simplify server search, Microsoft provides the web service http://messaging.office.microsoft.com/HostingProviders.aspx, which returns the list of providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use and complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well; they offer automatic addition to the list of Office Mobile Service providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:

    - Use an OUD internal capability called the Replication Gateway.
    - Use our synchronization tool, the Directory Integration Platform (DIP), part of Oracle Directory Services Plus.
    - Manual export and import.

    Let's check the pros and cons of each method.

    The Replication Gateway is the natural, out-of-the-box solution to perform the task. We created it as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications on existing ODSEE production instances, such as turning on the retro changelog. Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The OUD Replication Gateway does not require any specific tools or installation-specific procedures; it is managed like other OUD components, with monitoring and configuration via the standard console. Note that the OUD Replication Gateway does not perform attribute adaptation between ODSEE and OUD.

    Using the Directory Integration Platform as an external component to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay for using DIP to perform the synchronization task: you will have to turn on the retro changelog to get access to changes on the ODSEE side. This will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances. You will also not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement.

    Using export and import seems very simple, but this method cannot ensure a highly available deployment with up-to-date entries on both sides. It can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used if data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • 2010 is gone and Welcome 2011

    - by anirudha
    I spent the last days of the year in Firozabad. The town is quite small and near Agra, so I could not pass up the chance to see the Taj Mahal and the Red Fort; it was even my first chance to see them. I made a plan to go to Agra last Saturday. First I went to the Red Fort, and I talked with many foreigners. They loved talking with me, because the only person with them is their guide, who is like a book: he never really talks with you, he just tells you about everything at the location, because you paid for him. There were many people from various countries, such as Germany, Japan, Russia, Italy and many others. There was no problem talking with them; in fact they were happy to talk to me. When I had seen the whole Red Fort, I noticed a girl who looked like a foreigner. I asked where she came from, and she told me France. When I went elsewhere, I thought of proposing that we be friends. I had never proposed friendship to any girl, even in school and college, but I proposed it, and she accepted. I put my email ID in her hand as she left, but I still have not received her mail. Second, I went to the Taj Mahal. The Taj experience was not so good; I spent 3 or 4 hours in the rush. I found there was no real security, even though many army personnel were there; they all worked very slowly, spending 10 minutes to check one person. Their hands work as slowly as a low-configuration computer. I talked to many people there too, including a person who called himself Jacob, from Chicago. He spoke very fast, and I could not catch what he said. I had another problem with some Chinese visitors: when I tried talking with them, I found they spoke only Chinese. Wish you a very, very happy new year.

    Read the article

  • Set Covering : Runtime hang\error at function call in c

    - by EnthuCrazy
    I am implementing a set-covering application which uses a cover function: int cover(set *skill_list, set *player_list, set *covering). Suppose skill_set = {a,b,c,d,e} and player_list = {s1,s2,s3}; then the output covering = {s1,s3}, where, say, s1 = {a,b,c}, s3 = {d,e} and s2 = {b,d}. Now when I call this function it hangs at run time ("set_cover.exe stopped working"). Here is my cover function:

        typedef struct Spst_ {
            void *key;
            set *st;
        } Spst;

        int cover(set *skill_list, set *player_list, set *covering)
        {
            Liste *member, *max_member;
            Spst *subset;
            set *intersection;
            void **data;
            int max_size;

            set_init(covering);                /* to initialize set covering initially */

            while (skill_list->size > 0 && player_list->size > 0) {
                max_size = 0;
                for (member = player_list->head; member != NULL; member = member->next) {
                    if (set_intersection(intersection, ((Spst *)(member->data))->st, skill_list) != 0)
                        return -1;
                    if (intersection->size > max_size) {
                        max_member = member;
                        max_size = intersection->size;
                    }
                    set_destroy(intersection); /* at the end of each iteration */
                }
                if (max_size == 0)             /* to check for no covering */
                    return -1;

                /* insert the max subset from the player list into covering */
                subset = (Spst *)max_member->data;
                set_inselem(covering, subset);

                /* remove the covered skills from the skill list */
                for (member = (((Spst *)max_member->data)->st->head); member != NULL; member = member->next) {
                    data = (void **)member->data;
                    set_remelem(skill_list, data);
                }
                set_remelem(player_list, (void **)subset);  /* remove subset from player list */
            }

            if (skill_list->size > 0)
                return -1;
            return 0;
        }

    Now, assuming I have defined the three set-type sets (as stated above) and am calling from main as cover(skills, subsets, covering); => runtime hang. Please give inputs on the missing link in this, or the prerequisites for a proper call to a function of this type. EDIT: Assume the other functions used in cover are tested and working fine.

    Read the article

  • What's the best way to install the GD graphics library for Nagios?

    - by user1196
    While trying to install Nagios 3.2.3, I ran their ./configure script and got these errors:

        checking for main in -liconv... no
        checking for gdImagePng in -lgd (order 1)... no
        checking for gdImagePng in -lgd (order 2)... no
        checking for gdImagePng in -lgd (order 3)... no
        checking for gdImagePng in -lgd (order 4)... no
        *** GD, PNG, and/or JPEG libraries could not be located... *********
        Boutell's GD library is required to compile the statusmap, trends
        and histogram CGIs. Get it from http://www.boutell.com/gd/, compile
        it, and use the --with-gd-lib and --with-gd-inc arguments to specify
        the locations of the GD library and include files.
        NOTE: In addition to the gd-devel library, you'll also need to make
        sure you have the png-devel and jpeg-devel libraries installed on
        your system.
        NOTE: After you install the necessary libraries on your system:
        1. Make sure /etc/ld.so.conf has an entry for the directory in which
           the GD, PNG, and JPEG libraries are installed.
        2. Run 'ldconfig' to update the run-time linker options.
        3. Run 'make clean' in the Nagios distribution to clean out any old
           references to your previous compile.
        4. Rerun the configure script.
        NOTE: If you can't get the configure script to recognize the GD libs
        on your system, get over it and move on to other things. The CGIs
        that use the GD libs are just a small part of the entire Nagios
        package. Get everything else working first and then revisit the
        problem. Make sure to check the nagios-users mailing list archives
        for possible solutions to GD library problems when you resume your
        troubleshooting.
        ********************************************************************

    Which package do I want? libgd2-xpm-dev? libgd2-noxpm-dev? php5-gd? I'm not looking to do any image processing myself; I just want to get Nagios working.
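    For what it's worth, a sketch of the steps the configure notes describe, using Ubuntu-era package names (which dev variant of GD is "right" is exactly the question; libgd2-xpm-dev is one plausible choice, and the png/jpeg package names are assumptions that vary by release):

        # GD plus the PNG/JPEG development libraries the notes call out
        sudo apt-get install libgd2-xpm-dev libpng12-dev libjpeg62-dev

        # refresh the run-time linker cache (step 2 of the notes)
        sudo ldconfig

        # clean out the old configure results and try again (steps 3-4)
        cd nagios-3.2.3
        make clean
        ./configure    # add --with-gd-lib=DIR --with-gd-inc=DIR if GD still isn't found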

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now back up (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx, and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample that does the trick:

        // Create the backup helper class. The constructor automatically sets
        // the SourceStorageAccount property.
        StorageBackupHelper backup = new StorageBackupHelper(
            "storageaccountname", "storageaccountkey",
            "sourceStorageaccountname", "sourceStorageaccountkey",
            true, "apilicensekey");

        // Now set some properties...
        backup.UseCloudAgent = false;                       // backup locally
        backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
        backup.Override = true;
        backup.Location = DeviceLocation.LocalFile;

        // Set optional performance options
        backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // GUID strategy by default
        backup.MaxRESTPerSec = 200;  // attempt to stay below 200 REST calls per second

        // Start the backup now...
        string taskId = backup.Backup();

        // Use the Environment class to get the final status of the operation
        EnvironmentHelper env = new EnvironmentHelper(
            "storageaccountname", "storageaccountkey", "apilicensekey");
        string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters, and start the backup. The performance options are designed to help you back up your Azure Tables quickly while attempting to stay under a specific threshold to prevent storage account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the backup strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. It doesn't mean that your PartitionKey must contain GUIDs for this strategy to work, but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article
