Search Results

Search found 18347 results on 734 pages for 'generate password'.


  • Can't access local network when connected to PPPoE

    - by shantanu
    I am using a DSL (PPPoE) connection in Ubuntu. It has two parts (I am not sure): when I just connect the cable, the system automatically gets an IP address starting with 172.x.x.x (DHCP). When I connect using my username/password (PPPoE) I get another IP starting with 10.x.x.x and can access the internet, but I can't access some local IPs (in my LAN), such as the FTP and media servers provided by my ISP. I complained about this to my ISP, but they replied that Windows works. It's true: Windows 7 works fine with these settings, and I can access the internet and the local servers at the same time. I also use a Wi-Fi router (TP-Link TL-WR340G/TL-WR340GD), which gives the same problem. So everything is fine only when I connect the cable directly to the system and use Windows 7; otherwise there is a problem. A similar problem is discussed here.

    Edit: route -n before connecting:

        Kernel IP routing table
        Destination   Gateway      Genmask      Flags Metric Ref Use Iface
        0.0.0.0       172.100.0.1  0.0.0.0      UG    0      0   0   eth0
        172.100.0.0   0.0.0.0      255.255.0.0  U     1      0   0   eth0

    After connecting:

        Kernel IP routing table
        Destination   Gateway      Genmask          Flags Metric Ref Use Iface
        0.0.0.0       10.12.44.91  0.0.0.0          UG    0      0   0   ppp0
        10.12.44.91   0.0.0.0      255.255.255.255  UH    0      0   0   ppp0

    ifconfig after connecting:

        eth0  Link encap:Ethernet  HWaddr 74:d0:2b:d5:b3:6c
              inet6 addr: fe80::76d0:2bff:fed5:b36c/64 Scope:Link
              inet6 addr: 2002:ac64:154:c:76d0:2bff:fed5:b36c/64 Scope:Global
              inet6 addr: fec0::c:76d0:2bff:fed5:b36c/64 Scope:Site
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:26582 errors:0 dropped:18 overruns:0 frame:0
              TX packets:2340 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2542063 (2.5 MB)  TX bytes:244938 (244.9 KB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:4118 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4118 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:336759 (336.7 KB)  TX bytes:336759 (336.7 KB)

        ppp0  Link encap:Point-to-Point Protocol
              inet addr:10.12.44.95  P-t-P:10.12.44.91  Mask:255.255.255.255
              inet6 addr: fe80::a536:c7ae:e079:d88d/10 Scope:Link
              UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
              RX packets:689 errors:0 dropped:0 overruns:0 frame:0
              TX packets:744 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:3
              RX bytes:385746 (385.7 KB)  TX bytes:75296 (75.2 KB)

    I used Network Manager to create the network (DSL connection).

    Read the article

  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
    The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused; to name a few: Merge, Persist, Replicate and SaveOrUpdateCopy. Their purposes are:

    - Merge: copies properties from a transient entity to an eventually loaded entity with the same id in the first-level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first-level cache first; if using versioning, the transient entity must have the same version as in the database.
    - Persist: similar to Save or SaveOrUpdate; attaches a possibly new entity to the session, but does not generate an INSERT or UPDATE immediately, so the entity does not get a database-generated id until flush time.
    - Replicate: copies an instance from one session to another session, perhaps from a different session factory.
    - SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.

    Here are some samples of their use:

        ISession session = ...;
        AuthorDetails existingDetails = session.Get<AuthorDetails>(1); // loads an entity and places it in the first level cache
        AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; // a detached entity with the same ID as the existing one
        Object mergedDetails = session.Merge(detachedDetails); // merges the Name property from the detached entity into the existing one; the detached entity does not get attached
        session.Flush(); // saves the existingDetails entity, since it is now dirty due to the change in the Name property

        AuthorDetails details = ...;
        ISession session = ...;
        session.Persist(details); // details.ID is still 0
        session.Flush(); // saves the details entity now and fetches its id

        ISessionFactory factory1 = ...;
        ISessionFactory factory2 = ...;
        ISession session1 = factory1.OpenSession();
        ISession session2 = factory2.OpenSession();
        AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); // loads an entity
        session2.Replicate(existingDetails, ReplicationMode.Overwrite); // saves it into another session, overwriting any existing record with the same id; other options are Ignore (an existing record with the same id is left untouched), Exception (an exception is thrown if there is a record with the same id) and LatestVersion (the latest version wins)
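
    For completeness, the samples above assume an entity along these lines. This sketch is mine, not from the original post; the mapping itself (hbm.xml or Fluent) is omitted:

        // Hypothetical entity matching the samples above. Properties are
        // virtual so NHibernate can generate runtime proxies for the class.
        public class AuthorDetails
        {
            public virtual int ID { get; set; }
            public virtual string Name { get; set; }
        }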

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature: it is finally possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones get more attention than email messages, which can be lost in the mass of spam. The interface is standard and very similar to previous versions of the product. Adjustment is easy: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible at the provided URL (the user name and password are not verified). This check is needed because the OMS web service URL depends on the mobile operator and country. It is now possible to select the delivery method for alerts in the alert settings; the email option is selected by default. The delivery method is displayed in the list of existing alerts.

    Office Mobile Service (OMS): SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of their own protocol instead of using existing ones; that is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to a server which processes these messages and delivers them to mobile phones. The typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer over SMTP, i.e. by email. A key quality of this protocol is that it is built on top of the HTTP(S) and SOAP protocols. This means that, in effect, the SMS gateway must support a typed web service. What do you get from a web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was current when this article was published. To promote the protocol and simplify the search for a server, Microsoft provides the page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well, offering automatic addition to the list of Office Mobile Service providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

    Read the article

  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle is communicating with its systems customers. Previously, every item to be communicated was sent independently via an email message; however, not all messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle systems customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter summarizes customer-support-relevant information and covers topics that impact your support experience. For example:

    1. Did you know that sending explorer content to email addresses ending in @sun.com is going away soon? For more information, review Document 1362484.1.
    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
       - ASR Manager accepts your My Oracle Support user name (email address) and password [Doc ID 1345484.1]
       - The ASR IP address for secure file transfer has changed [Doc ID 1338575.1]
       - ASR No Heartbeat Status: find out how to resolve it [Doc ID 1346328.1]
    3. Did you notice we have changed the Service Request options for hardware and introduced a new problem category called "Automated Diagnosis"? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. This feature also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it: take the 5-minute survey.
    4. Are you being proactive, or are you still fire-fighting in reactive mode? If you are being proactive with your Oracle systems products, you might have used Oracle Sun System Analysis. Did you find it helpful? Can we improve it? You tell us: take the 5-minute survey.
    5. Are you aware that attaching files to your Service Request enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.
    6. Are you struggling to find patches, firmware or product downloads? If yes, these types of issues are all addressed in the Newsletter.

    If this is the type of information you want to know about each month, take the time to read the Newsletter and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA, I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11; the seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    When the seed value is 11, the first row inserted gets identity value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the DELETE statement:

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause, it deletes all the rows. However, when a new record is inserted, the identity value increases from 11 to 12. It does not reset but keeps increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement:

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed, it removes all the rows, and the next record inserted gets the identity value 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of the RESEED statement: notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with the value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more row and check the value, the new value is 2. The logic is Reseed Value + Interval Value; in this case it is 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean-up act:

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: if I reseed with some random number and then run the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then truncate the table, what is the seed value now?) Here is the complete script together; you can modify it and find the answer to the above question. Please leave a comment with your answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Broken package after update: linux-headers, error broken count > 0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: broken count > 0. Opening Update Manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install). Specifically, I have my Ubuntu on an AspireOne with 8 GB internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot, but to no avail. I have also tried apt-get install --fix-broken and I get the following:

        sudo apt-get install --fix-broken
        [sudo] password for elina:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          linux-headers-3.2.0-33-generic-pae
        The following NEW packages will be installed:
          linux-headers-3.2.0-33-generic-pae
        0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/977 kB of archives.
        After this operation 11,3 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 437051 files and directories currently installed.)
        Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack):
         unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device
        No apport report written because the error message indicates a disk full error
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried all the suggestions I could find:

        sudo apt-get clean
        sudo apt-get autoclean
        sudo apt-get autoremove
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get -f install
        sudo apt-get install --fix-broken

    Then I saw the mention of free space in the error, so I ran df -h and the result was:

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda1   7,0G  5,5G  1,1G   84%   /
        udev        235M  4,0K  235M   1%    /dev
        tmpfs       97M   816K  96M    1%    /run
        none        5,0M  0     5,0M   0%    /run/lock
        none        242M  352K  242M   1%    /run/shm

    I see that my root partition has 1.1 GB free, while the broken package, linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb, only takes up 11.3 MB on my hard drive. I'm so lost; I really hope there is something I'm missing here. I don't want to go about reformatting this bucket; it's really not worth the time. Any help with fixing this would be great.

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that have to be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF, 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T, 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO, 𠂊 (U+2008A) Han Character. You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests, and the results are quite bad:

    - Opera has problems editing them (deleting requires 2 presses of backspace).
    - Notepad can't deal with them correctly (deleting requires 2 presses of backspace).
    - File name editing in Windows dialogs is broken (deleting requires 2 presses of backspace).
    - All Qt3 applications can't deal with them: they show two empty squares instead of one symbol.
    - Python encodes such characters incorrectly when they are used directly: u'X' != unicode('X', 'utf-16') on some platforms when X is a character outside the BMP.
    - Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings.
    - StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes).
    - A WinForms TextBox may generate an invalid string when limited with MaxLength.

    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... do you think that UTF-16 should be considered harmful?
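
    The core of the complaint is easy to reproduce. Here is a minimal C# sketch (not from the original post) of what a non-BMP character looks like to code that assumes one char per character; the printed values follow directly from the UTF-16 surrogate-pair encoding rules:

        using System;

        class SurrogatePairDemo
        {
            static void Main()
            {
                // U+1D11E MUSICAL SYMBOL G CLEF, one of the non-BMP characters listed above.
                string clef = "\U0001D11E";

                Console.WriteLine(clef.Length);                   // 2: two UTF-16 code units, not one "character"
                Console.WriteLine(char.IsSurrogatePair(clef, 0)); // True
                Console.WriteLine(char.ConvertToUtf32(clef, 0).ToString("X")); // 1D11E: the actual code point

                // Naive per-char iteration sees the two surrogate halves separately,
                // which is exactly where the backspace and MaxLength bugs above come from.
                foreach (char c in clef)
                    Console.WriteLine(((int)c).ToString("X"));    // D834, then DD1E
            }
        }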

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background: This note is a supplement to the blog entry "SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs" by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single program ID on the SAP system. Here we will try to address the same requirements within the OSB framework.

    Project Build - Design Time: The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1: We first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD and JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool: Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA inbound event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that schema validation has been turned off. As a result, at runtime the adapter can use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat this step for each IDOC type.

    OSB Design Time - Part 2: Create two simple XML-based business services that write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a proxy service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

        fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element; for example, for a payload whose root element is DEBMAS06, the function returns "DEBMAS06". With the routing function in place, build the routing table with two branches to route the IDOCs to the appropriate business service, which writes the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time: After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately, saving the corresponding XML payloads for different IDOC types in different directories.

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effect system follows the common (e.g. COLLADA, DirectX) effect framework abstraction: effects have techniques, techniques have passes, passes have states and shader programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.) or to an in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters and passes necessary for rendering the effect using one method. Some algorithms require several passes to render the effect, so pipeline descriptions are broken into an ordered collection of pass objects. A pass provides a static declaration of all the render states, shaders and settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values and selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn or Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, each effect instance object with its own parameter settings, then there would be no need for the concept of a material... or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any) and don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework and the COLLADA definitions closely.
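
    For what it's worth, the COLLADA reading of the terms maps naturally onto types like the following. This is only an illustrative C# sketch (all names here are invented, not taken from COLLADA or DirectX), but it shows why a material is usually kept as a separate, lightweight object: the effect is the shared recipe, and the material is one parameterization of it.

        using System.Collections.Generic;

        // One rendering pipeline configuration: render states plus shader programs.
        class Pass
        {
            public Dictionary<string, string> RenderStates = new Dictionary<string, string>();
            public string VertexShader;
            public string PixelShader;
        }

        // One way of rendering the effect (e.g. "high-LOD", "night").
        class Technique
        {
            public string Name;
            public List<Pass> Passes = new List<Pass>();
        }

        // The shared recipe: the techniques and the parameters it exposes.
        class Effect
        {
            public List<Technique> Techniques = new List<Technique>();
            public List<string> ParameterNames = new List<string>();
        }

        // COLLADA's reading: a material *instantiates* an effect, binds concrete
        // parameter values and selects one technique. Many materials (Metal,
        // Plastic, Wood...) can share a single Effect instance.
        class Material
        {
            public Effect Effect;
            public string SelectedTechnique;
            public Dictionary<string, object> ParameterValues = new Dictionary<string, object>();
        }

    Under this reading, an "effect instance with its own parameter settings" and a "material" are the same idea; the separate term just names the instance-with-values role.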

    Read the article

  • Best practice while marking a bug as resolved with Bugzilla (versioning of product and components)

    - by Vincent B.
    I am wondering what is the best way to handle marking a bug as resolved while providing the version of the component/product in which the fix can be found.

    Context: for a project I am working on, we are using Bugzilla for issue tracking, and we have the following: a product "A" with a version number like vA.B.C.D. This product "A" has the following components: component "C1" with a version number like vA.B.C.D, component "C2" with a version number like vA.B.C.D, and component "C3" with a version number like vA.B.C.D. Internally we keep track of which component versions have been used to generate each version of product A. Example: product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5, and product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating product "A" only has access to the components' tags folders in SVN, not to the trunk of each component repository.

    Problem: when a bug is found in product "A" and the bug is related to component "C1", the version of product "A" is chosen (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it would be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; when the bug seems to be fixed, after some testing and validation, the developer generates a new tag for component "C1" with the version v1.0.0.4. At this point the developer needs to update the bug report, but what is the best thing to do?

    1. Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component"?
    2. Keep the bug as assigned and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that is generated with the newest version (v1.0.0.4 of C1)"?
    3. Another possible way to deal with this problem?

    Right now the problem is that when a product component CX is fixed, it is not certain in which future version of product A it will be included, so I cannot say in which version of the product it will be solved, but I can say in which version of the component CX it has been solved. So when should we mark a bug as solved: when the product A version includes the fixed version of CX, or as soon as the CX component has been fixed? Thanks for your personal feedback and ideas about this!

    Read the article

  • How can I fix "E: Internal Error, No file name for libc6"?

    - by SMAOUH
    Hello all. Please, I need your help to fix this problem: I have two broken packages on my system, and I can't reinstall them or perform any other operation (update, upgrade, install or remove apps). Ubuntu 12.04.3. I have not found any solutions; please help me.

        smaouh@Linux:~$ sudo apt-get install -f
        [sudo] password for smaouh:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          libopenal1 libpam-winbind libao-common gnome-exe-thumbnailer libqca2-plugin-ossl
          gir1.2-champlain-0.12 libmagickcore4 libmagickwand4 libmagickcore4-extra libcapi20-3
          python-unidecode libopenal-data liblqr-1-0 gir1.2-gtkchamplain-0.12 unixodbc
          wine-gecko2.21 libchamplain-0.12-0 python-glade2 imagemagick-common libosmesa6
          oss-compat gimp-help-common esound-common gimp-help-en libmpg123-0
          ttf-mscorefonts-installer imagemagick winbind libodbc1 fonts-droid fonts-unfonts-core
          libchamplain-gtk-0.12-0 libclutter-gtk-1.0-0 gir1.2-gtkclutter-1.0
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 386 not upgraded.
        4 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        dpkg: error processing libc6 (--configure):
         libc6:amd64 2.15-0ubuntu10.5 cannot be configured because libc6:i386 is in a different version (2.15-0ubuntu10.4)
        dpkg: dependency problems prevent configuration of libc-dev-bin:
         libc-dev-bin depends on libc6 (>> 2.15); however:
          Package libc6 is not configured yet.
         libc-dev-bin depends on libc6 (<< 2.16); however:
          Package libc6 is not configured yet.
        dpkg: error processing libc-dev-bin (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libc6-dev:
         libc6-dev depends on libc6 (= 2.15-0ubuntu10.5); however:
          Package libc6 is not configured yet.
         libc6-dev depends on libc-dev-bin (= 2.15-0ubuntu10.5); however:
          Package libc-dev-bin is not configured yet.
        dpkg: error processing libc6-dev (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libc6-i386:
         libc6-i386 depends on libc6 (= 2.15-0ubuntu10.5); however:
          Package libc6 is not configured yet.
        dpkg: error processing libc6-i386 (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         libc6
         libc-dev-bin
         libc6-dev
         libc6-i386
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        smaouh@Linux:~$

    Read the article

  • LDoms and Maintenance Mode

    - by Owen Allen
    I got a few questions about how maintenance mode works with LDoms.

    "I have a Control Domain that I need to do maintenance on. What does being put in maintenance mode actually do for a Control Domain?"

    Maintenance mode is what you use when you're going to be shutting a system down, or otherwise tinkering with it, and you don't want Ops Center to generate incidents and notifications of incidents. Maintenance mode stops new incidents from being generated, but it doesn't stop polling or monitoring the system, and it doesn't prevent alerts.

    "What does maintenance mode do with the guests on a Control Domain?"

    If you have auto recovery set and the Control Domain is a member of a server pool of eligible systems, putting the Control Domain in maintenance mode automatically migrates guests to an available Control Domain. When a Control Domain is in maintenance mode, it is not eligible to receive guests, and the placement policies for guest creation and for automatic recovery won't select this server as a possible destination. If there isn't a server pool, or there aren't any eligible systems in the pool, the guests are shut down. You can select a logical domain from the Assets section to view the dashboard for the virtual machine and the Automatic Recovery status, either Enabled or Disabled. To change the status, click the action in the Actions pane.

    "If I have to do maintenance on a system and I do not want to initiate auto-recovery, what do I have to do so that I can manually bring down the Control Domain (and all its guest domains)?"

    Use the Disable Automatic Recovery action.

    "If I put a Control Domain into maintenance mode, does that also put the OS into maintenance mode?"

    No, just the Control Domain server. You have to put the OS into maintenance mode separately.

    "Also, is there an easy way to see what assets are in maintenance mode? Can we put assets into, or take them out of, maintenance mode on some sort of group level?"

    You can create a user-defined group that will automatically include assets in maintenance mode. The docs here explain how to set up these groups. You'll use a group rule that looks like this:

    Read the article

  • Ubuntu Sluggish and Graphics Problem after Nvidia Driver Update

    - by iam
    I just started using Ubuntu (12.04) a few weeks ago and noticed that the interface is very slow and sluggish:

    - On the Dash, I have to type the entire app name and wait a few seconds before it shows up in the search box, and a bit longer before it displays the search results.
    - Opening files or applications also takes quite long and feels awkward.
    - Dragging icons or moving app windows around is not very responsive either: I have to pay extra attention when moving the mouse, otherwise Ubuntu does not make the correct movement or ends up doing something unintended, e.g. opening the window's full-screen options or moving a file to a different folder, which is frustrating.

    My PC is a few years old (1.7 GB RAM), so this could be a reason too, but when I check System Monitor it is hardly ever consuming much memory. Plus, web surfing in Firefox is actually lightning fast (much faster than in Windows), so I suspect there might be something wrong with the graphics driver (mine is a GeForce 7050). I checked around System Settings and found an option to update the Nvidia driver, so I tried it and restarted, as instructed. Now, I got into a big problem upon restart: the login-screen window (where I have to type in the password) would take several attempts to display, and finally did not manage to (it would freeze for several seconds before there was any movement again). The background screen also kept reloading several times, and at some point the screen turned black with pixelated color strips running along the bottom third of the screen; after a long while the background screen would come up again. Eventually I managed to access the desktop, but the launcher, top menu bar and app window borders would not appear. I searched around and found that many other people have this same problem after updating the Nvidia driver, and on some threads the suggestion is to run "killall -u $USER" on the command line (the only one of the various online suggestions I could try, as at that point I could not access the Terminal without the launcher; Ctrl-Alt-T doesn't work for me). So I did that, and by creating a new account I was able to access a desktop with a fully working launcher and menu again, but I would still have the same problem when logging into my original account. So I finally tried upgrading to 12.10, and now I can access my original account with a fully functional desktop: the launcher, menu and window borders are all back. However, the problem with the sluggishness still remains, and now I am scared of ever updating the Nvidia driver again! I wonder if anyone knows why updating the Nvidia driver causes this problem and whether there is a way I can update it safely in the future. I am also still not sure how to solve the sluggishness, and not sure where else to look for a solution.

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records, and by doing some data scrubbing, entity rollups and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated with PR, and is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approximately 20 million pages indexed in just over one year, along with an Alexa ranking around 2000.

    Noteworthy qualities we have in place:

    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest that a new site with millions of pages and no PR is suspicious. There is also a Google video of Matt Cutts speaking of a staged on-boarding of large sites, in order to avoid increased scrutiny (at approximately 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250-ish internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest in Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • Do we need to adopt a black-box asset our project is inheriting from its predecessor?

    - by Tom Anderson
    Our client has an eCommerce site which was developed by an in-house team, and is now showing its age. I work for a firm brought in as external contractors to build a replacement. Part of the current site is a Flash viewer applet which displays media about the product - zoom-able images, 360-degree views, movies, and so on. We need to show the same media the current site does, so we are simply reusing the viewer. The viewer is embedded on a page in the usual way, and told what media to show by means of an XML file it loads from our server, which is pretty simple for us to generate. We've got this working; it was pretty straightforward. But what else do we need to do? The thing is, as far as we're concerned, the viewer is a binary blob which is served from the client's content-distribution network. We embed it, feed it some XML, and it does its job, but we have no power over its internals. It's completely opaque to us - a black box. We can use it to do what it does, but we can't change it, so if we ever need to do something different, we're stuffed. We're building this site for the client, and when we're done, we'll hand it over for them to maintain. We won't be doing the maintenance ourselves. There's a small team within the client who are working as part of our team, and who will be the ones doing the maintenance. That team only includes one person from the team that built the old site, and it's not someone who knows the image viewer. The people who do know the image viewer are not slated to join our team when our system replaces theirs - they'll be moved to other projects. The documentation on the viewer is extremely thin, and as far as i know doesn't cover the internals at all. My worry is that if someone doesn't take some positive action, all knowledge of the internal workings of the viewer - even down to where the source code for it is - will be lost. It's possible it already has been. Is this something to worry about? If so, whose job is it to worry about it? What should they do about it once they've got worried?

    Read the article

  • Software center is broken and cannot be repaired or reinstalled

    - by Michal
    When I open the Software Center, I am told that I cannot use it, for it is broken and needs to be repaired. First I tried to do this automatically, as I was offered; I entered the root password, and then the installation failed:

        installArchives() failed: reconfiguring packages...
        reconfiguring packages...
        reconfiguring packages...
        reconfiguring packages...
        (Reading database ... 275048 files and directories currently installed.)
        Unpacking wine1.4-i386 (from .../wine1.4-i386_1.4-0ubuntu4.1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb (--unpack):
         trying to overwrite '/usr/bin/wine', which is also in package wine1.5 1.5.5-0ubuntu1~ppa1~oneiric1+pulse17
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb
        Error in function:
        dpkg: dependency problems prevent configuration of wine1.4-common:
         wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4.1); however:
          Package wine1.4 is not installed.
        dpkg: error processing wine1.4-common (--configure):
         dependency problems - leaving unconfigured

    What should I do now? First of all, I've tried reinstalling the Software Center, but that failed due to the same wine1.4 dependency as laid out here. I've googled for help, and although I don't understand Linux at all, I've tried some suggestions: I've tried editing the dpkg status in /var/lib/dpkg/status, which failed because the file could not be found. I've tried purging wine/*, but that had unresolved dependencies as well. It's a giant mess.

    Read the article

  • atkbd.c spamming the logs. What is this, and how do I get rid of it?

    - by turbo
    On my Vostro 1000 notebook, the following messages spam my dmesg:

        [18678.728936] atkbd.c: Unknown key released (translated set 2, code 0x8d on isa0060/serio0).
        [18678.728941] atkbd.c: Use 'setkeycodes e00d <keycode>' to make it known.
        [18679.831109] atkbd.c: Unknown key pressed (translated set 2, code 0x8d on isa0060/serio0).
        [18679.831119] atkbd.c: Use 'setkeycodes e00d <keycode>' to make it known.
        [18679.841607] atkbd.c: Unknown key released (translated set 2, code 0x8d on isa0060/serio0).
        [18679.841615] atkbd.c: Use 'setkeycodes e00d <keycode>' to make it known.
        [18680.901733] atkbd.c: Unknown key pressed (translated set 2, code 0x8d on isa0060/serio0).
        [18680.901744] atkbd.c: Use 'setkeycodes e00d <keycode>' to make it known.
        [18680.911536] atkbd.c: Unknown key released (translated set 2, code 0x8d on isa0060/serio0).
        [18680.911546] atkbd.c: Use 'setkeycodes e00d <keycode>' to make it known.

    It's most probably not from an actual key press, because it appears at regular intervals. First, what is it? It could be my battery, since it's nearly dead (as in, chargeable to 11% of the initial capacity), but I have no evidence for that. What is this, and how can I find out where it comes from? How can I get rid of it? Is there a 'dud' keycode? When I assign a keycode with sudo setkeycodes e00d $(random keycode), the key does actually get pressed. That makes it impossible to enter the sudo password, for example, so any 'real' keycode is not an option. It wasn't like this half a year ago. Even better than a dud keycode would be a real fix. It happens from 10.04 to 12.04 (before that, I don't know). I did read zcat /usr/share/doc/udev/README.keymap.txt.gz | less, as suggested in the Ubuntu wiki. /lib/udev/findkeyboards && sudo /lib/udev/keymap -i input/event5 produces what appear to be newlines in rapid succession. sudo udevadm monitor doesn't show the event.

    Read the article

  • Media server... serving files... for a limited time only

    - by Craig
    I’m new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room working double duty as a HTPC, and media server to share movies, TV shows, music, etc. on my Windows network. So using some spare old parts I had lying around I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user friendly variant, especially for people that are new to Linux. I had to do a few things to get the media files shared properly on my network: Made sure my two media drives auto-mount every time I boot the computer by editing the “fstab” file – “sudo nano /etc/fstab” Installed Samba - “sudo apt-get install samba” Set a password for Samba - “sudo smbpasswd –a USERNAME” Edited the Samba configuration file to make sure the computer was in my networks workgroup – “sudo nano /etc/samba/smb.conf” In the file manager (not sure if that’s the right name for it), I right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to; Owner, Group, and Others - Access: Create and Delete Files. Adjusted the Power Management settings to never put the system into sleep mode. I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then they would stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked for a bit longer (about 10-15mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn’t seem to be an issue with the files themselves, it seems to be a sharing problem on my network. So my question to the Ubuntu gurus out there is: Did I miss adding/editing something in the Samba configuration file? Did I use the right method to share my media files (file manager vs. using the terminal)? Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?). Are there any special settings in Ubuntu that I should be using since this computer as a media server (is there a media server mode?...!...?). Any help on this matter would be greatly appreciated. Thanks

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    Calling all enterprise architects | Enterprise architecture - InfoWorld
    Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference. (tags: ping.fm)

    Red Tape, Part II : OTN Garage
    "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage)

    The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin
    "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can't be seen from the process list or command shell history." (tags: oracle weblogic)

    World-class EA | Open Group Blog
    "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup)

    Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner)
    "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm)

    Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager - Long Awaited Feature for MDS
    Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0)

    Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog)
    Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0)

    Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA
    Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware)

    Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1
    Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish)

    Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker)
    Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • CAM v2.0 ships – all new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net (search for NIEM). In this all-new version, the support from Oracle has enabled a transformation of the editor's underlying Java framework, resulting in 3x performance improvement and 50% better memory utilization. The results of nearly six months of improvements are catalogued in the release notes: http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download

    Here, however, I'd like to talk about the strategic vision and highlight specific new go-to features that make a difference for exchange schema designers, with a focus on the NIEM community.

    So why is this a foundation version? Basically, the new drag-and-drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse, enabled from a canonical domain dictionary collection. So instead of grappling with XSD schema syntax or UML model nuances, this is straightforward, direct WYSIWYG visual engineering using familiar sets of business components. The toolkit then writes the complex XSD schema for you, along with test samples, documentation, XMI/UML models, mind maps and more.

    So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models or, as in the case of NIEM, from existing domain dictionary collections. I've been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. You can download those from OASIS and see how this can transform how you build actual business exchanges, improving their quality, consistency and usability, and dramatically enabling automated generation of artifacts you only dreamed of before, such as a model of your entire major exchange collection components: http://www.oasis-open.org/committees/documents.php?wg_abbrev=election

    So what we have here is a foundation version, setting the scene and the basis for changing how people generate and manage information exchanges. It is a foundation built using the OASIS CAM standard, combined with aspects of the NIEM Naming and Design Rules, the UN/CEFACT Core Components specifications, and emerging work on the OASIS CIQ name and address and ANSI/ISO code list schemas. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development, answering questions such as:

    - How good is my canonical component collection?
    - How much reuse is really occurring?
    - What inconsistencies and extensions are there in the dictionary components?

    Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials, so we will be focusing there for the next couple of months, especially leading up to the IJIS industry event in July in New Jersey and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like, and for any string columns you pass a fixed length value. You then specify how many rows in total you require to be generated. We use this component to test the pipeline and downstream components. Previously we would have used a script component (as a source) to generate the rows, but found ourselves rewriting the code too often, so we created this component.

    Screenshots: SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services.

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source in the list.

    Downloads: The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)

    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 compatibility testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)
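
    For anyone who wants the behaviour without installing the component, the script-component approach mentioned above is easy to reproduce. Here is a hedged C# sketch of a source Script Component's CreateNewOutputRows; the output name (Output0) and column names (RandomInt, RandomString) are assumptions for illustration, defined in the component's output metadata, and are not part of the actual adapter:

        // Sketch of an SSIS source Script Component that generates random rows,
        // assuming an output "Output0" with columns RandomInt (four-byte signed
        // integer, DT_I4) and RandomString (DT_WSTR, length 10).
        public class ScriptMain : UserComponent
        {
            private const int RowCount = 1000;    // total rows to generate
            private const int StringLength = 10;  // fixed length for the string column
            private readonly Random _random = new Random();

            public override void CreateNewOutputRows()
            {
                const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
                for (int i = 0; i < RowCount; i++)
                {
                    Output0Buffer.AddRow();
                    Output0Buffer.RandomInt = _random.Next();

                    // Build a fixed-length random string for the DT_WSTR column.
                    var chars = new char[StringLength];
                    for (int j = 0; j < StringLength; j++)
                        chars[j] = alphabet[_random.Next(alphabet.Length)];
                    Output0Buffer.RandomString = new string(chars);
                }
            }
        }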

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first-hand the offering from discountasp.net for hosted TFS 2010. This first part describes the setup process for the account itself, with some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services and the type of hosting we recommend. So, wouldn't me running on discountasp.net be an issue? Actually? NO. Ok, enough rambling. Let's get some details here. It is a Software as a Service model. Through it we get source control, version control, work item tracking and such. What about build? If your needs include build management and such, you may need to look at some other options. But this is still a great offering for those that are moving from SourceSafe, or for organizations with 3 to 5 developers on staff who do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration. The signup process was seamless: very easy to follow and complete, and then transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server control panel login. Once there, here is what I saw after the initial setup process of naming my Team Project Collection. So, moving on, once I clicked the area to get my server info, I got the following. Then it was a matter of getting the first user in there. Then on to connecting Visual Studio to my hosted TFS: getting the server information and the user account created, I configure those options in Visual Studio. Using Team Explorer, I add a new server configuration. Once this is provided, click OK; you will be challenged for a username and password. Provide them, and you will land on the following screen; then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post on the topic will cover creating your first Team Project and uploading a Project Template to the server.
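
    As a side note, once the hosted collection URL and user account exist, the same connection can be made programmatically through the TFS 2010 client object model. This is a hedged sketch of my own; the collection URL and credentials below are placeholders, not discountasp.net values:

        using System;
        using System.Net;
        using Microsoft.TeamFoundation.Client;

        class ConnectToHostedTfs
        {
            static void Main()
            {
                // Placeholder URL; use the server info shown in the hosting control panel.
                var collectionUri = new Uri("https://example.tfs-host.net/tfs/DefaultCollection");
                var credentials = new NetworkCredential("yourUser", "yourPassword");

                using (var collection = new TfsTeamProjectCollection(collectionUri, credentials))
                {
                    collection.EnsureAuthenticated(); // throws if the username/password are rejected
                    Console.WriteLine("Connected to: " + collection.Name);
                }
            }
        }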

    Read the article

  • Windows Phone 7 Development Updates – March 8th 2011

    - by Nikita Polyakov
    Here are the latest updates from the Windows Phone 7 developer world that went live this month.

    Some of the latest numbers:
    - The Windows Phone Marketplace currently offers more than 9,000 quality apps and games and enjoys a base of over 32,000 registered developers, delivering an average of 100 new apps every day.
    - There have been over 1 million downloads of the developer tools for Windows Phone 7.

    Trial versions help you sell more
    Trials result in higher sales, by the numbers:
    - Users like trials: paid apps with trial functionality are downloaded 70 times more than paid apps that don't offer one.
    - Nearly 1 out of 10 trial apps downloaded converts to a purchase, and trial-enabled apps generate 10 times more revenue on average than paid apps that don't include trial functionality.
    - Trial downloads convert to paid downloads quickly. More than half of trial downloads that convert to a sale do so within the first 24 hours of the trial download, and mostly within 2 hours of it.

    Microsoft Ad Control is gaining traction
    By the numbers, for ad-supported Windows Phone 7 apps:
    - Roughly ¼ of all registered U.S. WP7 developers have downloaded the free Ad SDK for Silverlight and XNA.
    - Of ad-funded apps, over 95 percent use the free Microsoft Advertising Ad Control.
    - Monthly impressions from our Ad Exchange have continued to grow by double digits; impressions increased by 376 percent since January.

    For the Ad Control, the first wave of "How Do I" videos is now available on MSDN:
    - Create an Ad in a Windows Phone 7 XNA Game App
    - Register Ad-Enabled Windows Phone 7 Apps
    - Measure Ad Performance of Windows Phone 7 Apps

    Broader international app submission for free apps through Yalla Apps
    As of today you can start submitting your free applications in developer markets that are currently not covered by Microsoft. To submit your free application if you DO NOT belong to one of the Marketplace-supported countries, go to Yalla Apps.

    Marketplace policy updates: free app submission limit upped to 100, and other news
    Microsoft has been revisiting a few of its Marketplace policies based on feedback from developers, to reduce friction and cost. Word for word:
    1. We have raised the limit on the number of certifications that can be performed for FREE apps at no cost to the registered developer from five to 100. This was a common request from developers which we are glad to implement after building alternate methods to ensure that users can find and download high quality apps.
    2. We have converted policy 5.6 - related to the inclusion of contact information for support - from a mandatory to an optional policy. This is still a strongly recommended best practice, but we recognized and responded to developer feedback that this policy was creating excessive drag on the certification process for developers without commensurate user benefit for all apps.
    3. We also understand the desire for clarification with regard to our policy on applications distributed under open source licenses.  The Marketplace Application Provider Agreement (APA) already permits applications under the BSD, MIT, Apache Software License 2.0 and Microsoft Public License.  We plan to update the APA shortly to clarify that we also permit applications under the Eclipse Public License, the Mozilla Public License and other, similar licenses, and we continue to explore the possibility of accommodating additional OSS licenses.

    Enjoy and happy coding! See the official blog post for reference.
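
    Since trial conversions drive so much of that revenue, it is worth wiring the trial check in early. Below is a minimal sketch using the Microsoft.Phone.Marketplace license API; the TrialState wrapper class is a hypothetical name, and caching the result is simply a common practice since IsTrial() is not free to call repeatedly.

        using Microsoft.Phone.Marketplace;

        // Hypothetical wrapper: check the license once per run and cache it.
        public static class TrialState
        {
            private static readonly LicenseInformation License = new LicenseInformation();
            private static bool? isTrial;

            public static bool IsTrial
            {
                get
                {
                    if (!isTrial.HasValue)
                    {
                        isTrial = License.IsTrial(); // true while running as a trial
                    }
                    return isTrial.Value;
                }
            }
        }

    An app can then gate premium features on TrialState.IsTrial and offer the Marketplace purchase page when it returns true.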

    Read the article

  • How can I dynamically load the correct sprite from a sprite sheet?

    - by Leonard Challis
    I am making a simple card game in Unity. The game is based on a standard 52-card pack, with identical backs and unique faces. In my particular game different cards are worth different values and have various special abilities. The game will have 52 cards on the table (in the draw position, in the face-down deck, or in someone's hand) at all times, so this number won't change. I thought that making a Card prefab and instantiating 52 of these manually would be a bad idea. Even doing it in code, I thought, would be a bit OTT, and that I should just instantiate visual cards when they are face-up to the player.

    I have a sprite sheet of the 52 cards and the back, which is imported as a Sprite in Multiple mode and sliced into a grid containing all the cards needed to play the game. The problem I now face is that, through my GameController script, I want to generate a shuffled pack of cards, deal some to each player, and then show those cards to a player. However, I am not sure of the best way, or even if it's possible, to do this dynamically with the sprite sheet as it is. For instance, if I have the following:

        private CardRank rank;
        private CardSuite suite;

        private void Start()
        {
            this.rank = CardRank.Ace;
            this.suite = CardSuite.Spades;
        }

    This class would be instantiated by the game manager, and I would have 52 of these in code. Whenever I have to visually show a card in the scene, I would use a card prefab, which is essentially a game object with a SpriteRenderer on it. I would need to dynamically load the correct sprite for this object from the sprite sheet. The sliced sprites from the sprite sheet actually have names in the format AS (Ace of Spades), 7H (Seven of Hearts), etc., though this was a manual thing I did myself, of course.

    I have also tried various alternative solutions, including creating animations, having separate sprites not in a sprite sheet, and having an array of available sprites with a specific index for each card, but none seem as elegant as loading the correct sprite at runtime, as I'm trying to. So, how do I load a specific sprite from a sprite sheet at runtime? I'm open to suggestions, even those that make me think differently about how to approach the problem.
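
    One approach that keeps the sheet intact, sketched below, is to place it in a Resources folder and build a name-to-sprite dictionary at runtime with Resources.LoadAll<Sprite>(), which returns every sub-sprite of a sheet sliced in Multiple mode. The asset path "Cards" and the CardSprites class are assumptions for illustration, not a definitive answer.

        using System.Collections.Generic;
        using System.Linq;
        using UnityEngine;

        public static class CardSprites
        {
            private static Dictionary<string, Sprite> bySliceName;

            // cardName matches the slice names in the sheet, e.g. "AS" or "7H".
            public static Sprite Get(string cardName)
            {
                if (bySliceName == null)
                {
                    // Assumes the sliced sheet lives at Assets/Resources/Cards.png;
                    // LoadAll returns all sub-sprites of a Multiple-mode sheet.
                    bySliceName = Resources.LoadAll<Sprite>("Cards")
                                           .ToDictionary(s => s.name);
                }
                return bySliceName[cardName];
            }
        }

    A card prefab could then set its sprite with something like GetComponent<SpriteRenderer>().sprite = CardSprites.Get("AS") when the card is revealed.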

    Read the article

  • Error installing RVM

    - by Dbugger
    I am following this guide, but this is the output I receive. What is the problem?

    dbugger@mercury:~$ \curl -sSL https://get.rvm.io | bash -s stable --rails
    Downloading https://github.com/wayneeseguin/rvm/archive/stable.tar.gz
    Upgrading the RVM installation in /home/dbugger/.rvm/
    RVM PATH line found in /home/dbugger/.profile /home/dbugger/.bashrc /home/dbugger/.zshrc.
    RVM sourcing line found in /home/dbugger/.bash_profile /home/dbugger/.zlogin.
    Upgrade of RVM in /home/dbugger/.rvm/ is complete.

    # Enrique,
    #
    # Thank you for using RVM!
    # We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
    #
    # ~Wayne, Michal & team.

    In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

    Upgrade Notes:
      * No new notes to display.

    rvm 1.25.27 (stable) by Wayne E. Seguin <[email protected]>, Michal Papis <[email protected]> [https://rvm.io/]

    Searching for binary rubies, this might take some time.
    No binary rubies available for: ubuntu/14.04/x86_64/ruby-2.1.2.
    Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
    Checking requirements for ubuntu.
    Installing requirements for ubuntu.
    Updating system..........
    Installing required packages: gawk, libreadline6-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3....
    Error running 'requirements_debian_libs_install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3',
    showing last 15 lines of /home/dbugger/.rvm/log/1401804140_ruby-2.1.2/package_install_gawk_libreadline6-dev_libssl-dev_libyaml-dev_libsqlite3-dev_sqlite3.log
    ++ /scripts/functions/utility : __rvm_try_sudo() 405 > sudo -p '%p password required for '\''apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3'\'': ' apt-get --no-install-recommends --yes install gawk libreadline6-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Some packages could not be installed. This may mean that you have requested an impossible situation or, if you are using the unstable distribution, that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     libssl-dev : Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2) but 1.0.1f-1ubuntu2.1 is to be installed
    E: Unable to correct problems, you have held broken packages.
    ++ /scripts/functions/utility : __rvm_try_sudo() 405 > return 100
    ++ /scripts/functions/requirements/ubuntu : requirements_debian_libs_install() 36 > return 100
    Requirements installation failed with status: 100.

    Read the article
