Search Results

Search found 765 results on 31 pages for 'mr shoubs'.

  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I'm a developer. I like to write new, exciting code every day; my perfect day at work is one where I wake up knowing I get to write code I haven't written before, or use a framework, language or platform that is new to me. The best days in the office are when a project is waiting for me to architect or write. In my 15 years in the development field, the only way to get a better salary was to manage people: not just to lead developers, but to actually manage people. Something I found out when I got into a management position is that I'm not that good at managing people, and I'm not afraid to say it. I do not enjoy that part of the job, and worse, it takes time away from what I really like. Leading developers and managing people are very different things. I do like teaching and leading developers on a project. Yet most people believe, and it is true in most companies, that the way to get a better salary is to be promoted into a manager position. In order to advance in your career you need to let go of the everyday writing of code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career, become a manager and get that manager salary. That path is the common one in most companies; the most intelligent companies out there have learned that promoting a good developer means getting a poor manager and losing a good resource. Now scratch everything I said, because as I previously stated, I don't see myself going to the office every day and just managing people until it is time to go home. I like to spend hours working on some code to accomplish a task, learning new platforms and languages or patterns for existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems is not something I could enjoy. Then, riding my motorcycle to work one cold morning over the Redlands Canyon and listening to the .NET Rocks podcast, I heard Michael "Doc" Norton explaining how to take control of your development career without necessarily going down the manager's track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell simply confirmed everything I have ever done, with more detail, and reassured me that there are other paths. His method is simple, and most of us already do many of the steps. Mr. Michael "Doc" Norton believes it pays off in the long run, and that in the end companies prefer to pay higher salaries to those developers. I would argue that many companies do not see developers that way, though that is less true for bigger companies. However, I do believe the value of those developers increases, and most of the time changing companies will increase their salary more than staying at the same one. In short, without even trying to step into Mr. Norton's shadow, and without following his steps in order: you should love to learn new technologies, and then teach them to other geeks. I personally have learned many technologies and I haven't stopped doing so; I am a professor at UCR, where I teach ASP.NET and Silverlight. Mr. Norton continues that after that, you want to be involved in the development community: user groups, online forums, open source projects.
    I personally speak at user groups, and I'm very active in forums asking and answering questions; for that work I was awarded the Microsoft MVP for ASP.NET. After you accomplish all of that, you should also expose yourself for what you know and what you do not know; learning a new language will make you humble again, as well as extremely happy. There is no better feeling than learning a new language or pattern in your daily job. If you love your job and what you do every day, I really recommend Michael's presentation, which he kindly shared at the link below. His confirmation is refreshing: my future is not behind a desk where the computer screen sits on my right-hand side instead of in front of me; it is one where I don't have to spend my days filling out performance forms for people, and where the new platforms I haven't used yet are just at my fingertips. Presentation here: http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3
    Take Control of Your Development Career, by Michael "Doc" Norton (@DocOnDev, http://docondev.blogspot.com/, [email protected]), in outline: the deck opens with his "recovering post technical" introduction (I love to learn, I love to teach, I love to work in teams, I love to write code, I really love to write code; what about you, do you love your job, your employer, your boss, what do you really love?) and then lays out the plan. Take Control: Get Noticed (know your business, understand management, do your existing job, make yourself expendable); Get Together (join a user group, help run a user group, start a user group); Get Your Mojo (kata, koans, breakable toys, open source); Get Naked (run with group A, do something different, own your mistakes, admit you don't know); Get Schooled (choose a mentor, attend conferences, teach a new subject); Get Started (read these, again).
    In short summary, I recommend any developer check his blog and, more importantly, his presentation. I haven't been lucky enough to watch him live; I'm looking forward to the day I have the opportunity. He is giving us hope for the future of developers. When I see some of my geek friends moving into positions that within a few short years they begin to regret, I get more unsure of my future doing what I love. I would say the answer now is to look at the spectrum of companies that understand and appreciate developers. There are a few out there; hopefully, with time, code sweatshops will start disappearing and being a developer will feed a family of four. Cheers, Al

    Read the article

  • Bajaj Electricals Lights up India with Oracle SCM Apps!

    - by [email protected]
    The executive team from Bajaj Electricals/India did a fantastic job of summarizing their Oracle experience in the 5-minute clip below. Their Chairman, Mr. Bajaj, along with Ram (GM) and several other business leaders, highlighted the decision to select Oracle and Oracle Consulting to connect their business applications to meet the needs of a growing country. It's a clip worth listening to and provides keen insight into both the vision and the results. http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=8226860

    Read the article

  • Gparted Partition Mount Points Alternate Between 2 Physical Disk Drives

    - by California Ken
    I'm running Ubuntu Server 14.04 on a system with 2 physical disk drives. I frequently see mount errors on startup. When I check the drive partitions using GParted, I see that my two "non-system-created" data partitions have the wrong disk assignments (i.e. sda1 vs. sdb1, or vice versa). If I hand-edit /etc/fstab to match GParted, the system will boot error-free one time. On the second restart I get the "serious mount problem" error for the 2 data partitions, and when I check GParted, the disk assignments have changed again (again, GParted and fstab don't match). A listing of my /etc/fstab is:

        # /etc/fstab: static file system information.
        # Use 'blkid' to print the universally unique identifier for a device; this may
        # be used with UUID= as a more robust way to name devices that works even if
        # disks are added and removed. See fstab(5).
        # / was on /dev/sdb2 during installation
        UUID=766a06a4-e5af-484a-adf0-fa1e88da7212 / ext4 errors=remount-ro,user_xattr,acl,barrier=1 0 1
        # swap was on /dev/sda6 during installation
        UUID=8c42f835-ead3-43fb-88d8-196f5dfc3aa7 none swap sw 0 0
        # swap was on /dev/sdb3 during installation
        UUID=2214deec-ba98-47da-aea7-4e46998f3e57 none swap sw 0 0
        /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
        /dev/sda1 /media/ken/Linux-Data ext3 defaults 0 2
        /dev/sda5 /media/ken/Data2 ext4 defaults 0 2

    The device designations in the last 2 lines are the ones in question. The fstab entries do NOT change between system restarts, but the mount points in the GParted display do. Does anyone have a fix for this? Thanks
    Mr. Young and Mr. Gedak, following are my fstab file and two blkid outputs. The fstab output is correct. The first blkid output was taken after a reboot and is WRONG! The sda and sdb device partition data is reversed. The 2nd blkid output was taken after a second reboot (fstab not changed); it shows the sda and sdb partition data CORRECTLY. I didn't see any duplicate UUIDs. Does anyone have any idea why the GParted and blkid outputs alternate on consecutive reboots? The alternating partition data is real, since when the partition assignments are reversed the boot sequence halts with disk mounting errors (I have to press "s" to skip the mounts). Thanks again. Ken
    I copied the contents of a text file showing my fstab and 2 blkid outputs. The text file contents show up in the text entry box but do not appear in the main body of the question. Is there a way I can attach the text file, or edit this question, so that the text is displayed for question viewers?
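    Because the kernel can enumerate the two disks in either order on each boot, device names such as /dev/sda1 are not stable identifiers; the usual remedy, which the commented fstab header above already hints at, is to mount the data partitions by UUID. Below is a minimal sketch of the last two fstab entries rewritten that way; the UUIDs shown are placeholders and would have to be replaced with the actual values reported by `sudo blkid` for those two partitions:

        # Data partitions identified by UUID instead of by device name (placeholder UUIDs)
        UUID=1111aaaa-1111-aaaa-1111-aaaa1111aaaa /media/ken/Linux-Data ext3 defaults 0 2
        UUID=2222bbbb-2222-bbbb-2222-bbbb2222bbbb /media/ken/Data2      ext4 defaults 0 2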

    Read the article

  • Review the New Migration Guide to SQL Server 2012 Always On

    - by KKline
    I had the pleasure of meeting Mr. Cephas Lin, of Microsoft, last year at the SQL Saturday in Indianapolis and then later at the PASS Summit in the fall. Cephas has been writing content for SQL Server 2012 Always On. Cephas has recently published his first whitepaper, a migration guide to SQL Server AlwaysOn. Read it and then pass along any feedback: HERE Enjoy, -Kev - Follow me on Twitter !...(read more)

    Read the article

  • links for 2011-03-08

    - by Bob Rhubart
    The Empowered Business "Someone needs to be the enterprise parent that asks the question, “do you really need that?” It may be a shiny new thing, but does it make a difference in the ability to accomplish the strategy and goals?" - Enterprise Architect Todd Biske (tags: enterprisearchitecture)
    Knowledge Workers in the British Raj "While we’ve used technology to change business, business has also evolved to the point that it’s changing how we think about and use technology." - Peter Evans Greenwood (tags: enterprisearchitecture enterprise2.0)
    Arun Gupta, Miles to go ...: OTN Developer Day Boston 2011 - Slides & Trip Report Arun Gupta shares slides from his Developer Day presentations. (tags: oracle otn java)
    Use WLST to Delete All JMS Messages From a Destination (James Bayer's Blog) James Bayer responds to a question. (tags: oracle otn weblogic jms)
    Triangle Circle Square: Apex in the Amazon Cloud Scott Wesley shares several links to resources covering Oracle Apex on an Amazon EC2 instance. (tags: oracle apex ec2 amazon cloud)
    William Vambenepe: Reading IBM's proposed standard for Cloud Architecture The always entertaining William Vambenepe gives IBM's proposed Cloud standards the full Ebert. (tags: oracle cloud ibm standards)
    Government Information Group Cloud Computing Research Study "The twin pressures of reduced budgets and the need for greater efficiency have led the federal government to strongly promote cloud computing as a solution whenever possible." (tags: cloudcomputing cloud)
    The Ron Batra Blog: Technology Whispers: Top 10 Reasons to go ExaData "Continuing my exploration of ExaData, I thought I'd take a minute to consolidate my thoughts into key reasons for which Oracle ExaData could be a good fit for your needs." - Oracle ACE Director Ron Batra (tags: oracle oracleace exadata)
    Oracle WebCenter: Composite Applications & Mash-Ups (Oracle Enterprise 2.0 Blog) "The new Business Mash-up editor allows business users to take any Oracle Application or 3rd party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups." (tags: oracle webcenter enterprise2.0)
    Antonio Romero: Great Discussion of ETL and ELT Tooling in TDWI Linkedin Group Antonio says: "There’s a great discussion of ETL and ELT tooling going on in the official TDWI Linkedin group, under the heading 'How Sustainable is SQL for ETL?' It delves into a wide range of topics." (tags: oracle linkedin etl elt)
    YouTube - Bunny Inc. - Episode 1. Mr. CIO meets Mr. Executive Manager Yes, it's a commercial. But it's well done and it's funny. (tags: e20 enterprise2.0 webcenter)
    Markus Eisele: Both Weblogic and Glassfish are strategic products for Oracle Oracle ACE Director Markus Eisele shares selected quotes pulled from the recent TechCast Live interview with Oracle's Anil Gaur and Adam Leftik. (tags: oracle java weblogic glassfish)
    How to become an Oracle SOA expert? (SOA Partner Community Blog) Jürgen Kress shares info and links for those interested in capitalizing on SOA. (tags: oracle soa)

    Read the article

  • Industrial Manufacturing Industry Trends and Challenges

    Mr. Lorne Jones, Senior Director of Oracle's Global Manufacturing Strategy and Marketing practice, talks with Fred about the challenges industrial manufacturing companies face, and Oracle's expertise in supporting their Lean enterprise initiatives with Oracle enterprise solutions.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data
    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced with one. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.
    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.
    Invoke-MapRedux
    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.
    Map
    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
    A Map function is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");
            # The hashtable to return
            $list = @{};
            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.
            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce
    Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            # The hashtable to return
            $redux = @{};
            # Return
            Write-Output $redux;
        }

    All Together Now
    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param
            (
                [PsObject] $dataset
            )
            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        $MyReduce =
        {
            Param
            (
                [object] $key,
                [PSObject] $dataset
            )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }
            # Reduce
            $redux.Add($s.MyKey, $sum / $count);
            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion
    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • how to get rid of light desktop manager entry in ubuntu greeter

    - by user205162
    Got "light desktop manager" entry in ubuntu greeter for some strange reason and can't get rid of it. As the current OS is Ubunu 13.10 the way to configure the lightdm greeter is rather strange, can't find any solution in greeter configs. More annoying thing is that Mr.LDM seems to have password very hard to guess so I can't login to this "account". Can you help me to find source of this problem and find the solution?

    Read the article

  • Streaming Audio over UDP to Android

    - by Mr. Pig
    Is it possible to have Android (perhaps via MediaPlayer or a different existing class) accept media streams over UDP? I've successfully had MediaPlayer connect to an HTTP stream (as well as static files hosted on an HTTP server) but I'm wondering how one would go about accepting a stream from a UDP source. I've seen this and suppose a solution similar to that (where I download the stream via an independent UDP socket and then move the data to a MemoryBuffer that I then pass to MediaPlayer) is an option but I'm curious if a method already exists in the SDK, and if it does not, what other options do I have? Thanks

    Read the article

  • Asp.net MVC ModelState.Clear

    - by Mr Grok
    Can anyone give me a succinct definition of the role of ModelState in ASP.NET MVC (or a link to one)? In particular, I need to know in what situations it is necessary or desirable to call ModelState.Clear(). Bit open ended, huh... sorry. I think it might help if I tell you what I'm actually doing: I have an Edit action on a controller called "Page". When I first see the form to change the Page's details, everything loads up fine (binding to a "MyCmsPage" object). Then I click a button that generates a value for one of the MyCmsPage object's fields (MyCmsPage.SeoTitle). It generates fine and updates the object, and I then return the action result with the newly modified page object and expect the relevant textbox (rendered using <%= Html.TextBox("seoTitle", page.SeoTitle) %>) to be updated... but alas it displays the value from the old model that was loaded. I've worked around it by using ModelState.Clear(), but I need to know why / how it has worked so I'm not just doing it blindly.
    PageController:

        [AcceptVerbs("POST")]
        public ActionResult Edit(MyCmsPage page, string submitButton)
        {
            // add the seoTitle to the current page object
            page.GenerateSeoTitle();

            // why must I do this?
            ModelState.Clear();

            // return the modified page object
            return View(page);
        }

    Aspx:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MyCmsPage>" %>
        ....
        <div class="c">
            <label for="seoTitle">Seo Title</label>
            <%= Html.TextBox("seoTitle", page.SeoTitle) %>
            <input type="submit" value="Generate Seo Title" name="submitButton" />
        </div>
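    For context on why the workaround helps: the HTML helpers such as Html.TextBox look for a value in ModelState (the posted form values) before falling back to the model passed to the view, so after a POST the stale posted "seoTitle" wins over the regenerated one. Clearing all of ModelState works, but a hedged, more targeted variant of the same action is to remove only the one stale key; the sketch below assumes the same MyCmsPage type and GenerateSeoTitle method from the question:

        [AcceptVerbs("POST")]
        public ActionResult Edit(MyCmsPage page, string submitButton)
        {
            page.GenerateSeoTitle();

            // Drop only the posted value for this field so Html.TextBox renders
            // page.SeoTitle instead of the stale ModelState entry.
            ModelState.Remove("seoTitle");

            return View(page);
        }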

    Read the article

  • Example WCF XML-RPC client C# code against custom XML-RPC server implementation?

    - by mr.b
    I have built my own little custom XML-RPC server, and since I'd like to keep things simple on both the server and client side (the server side runs PHP, by the way), what I would like to accomplish is to create the simplest possible client in C# using WCF. Let's say that the contract for the service exposed via XML-RPC is as follows:

        [ServiceContract]
        public interface IContract
        {
            [OperationContract(Action = "Ping")]
            string Ping();                 // server returns back string "Pong"

            [OperationContract(Action = "Echo")]
            string Echo(string message);   // server echoes back whatever message is
        }

    So, there are two example methods, one without any arguments and another with a simple string argument, both returning strings (just for the sake of example). The service is exposed via HTTP. What's next? Thanks for reading!
    P.S. I have done my homework of googling around for samples and similar, but all that I could come up with are some blog-related samples that use existing (and very big) classes, which implement correct IContract (or IBlogger) interfaces, so that most of what I am interested in is hidden below several layers of abstraction...
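    Since the stock WCF bindings speak SOAP rather than XML-RPC, a client along these lines generally relies on an XML-RPC message formatter plugged in as an endpoint behavior. The sketch below assumes the XmlRpcEndpointBehavior type from the community WCF XML-RPC sample that the blog posts mentioned above build on; the behavior class name and the endpoint URL are assumptions, not built-in WCF APIs:

        using System;
        using System.ServiceModel;

        class Program
        {
            static void Main()
            {
                // Minimal channel setup; XmlRpcEndpointBehavior is assumed to come
                // from the community XML-RPC sample, not from WCF itself.
                var factory = new ChannelFactory<IContract>(
                    new WebHttpBinding(WebHttpSecurityMode.None),
                    new EndpointAddress("http://www.example.com/xmlrpc"));
                factory.Endpoint.Behaviors.Add(new XmlRpcEndpointBehavior());

                IContract proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Ping());          // expected: "Pong"
                Console.WriteLine(proxy.Echo("Hello"));   // expected: "Hello"
                ((IClientChannel)proxy).Close();
            }
        }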

    Read the article

  • Java POI 3.6 XWPF usage guidelines (reading content of docx file)

    - by Mr CooL
    I assume the following objects should be used to read the contents of a DOCX file: XWPFDocument and XWPFWordExtractor. However, the compiler warns me that I have not included the correct libraries in the classpath. I think I'm kinda lost, not knowing which jar file is the right one to include, since there are so many jar files (POI libraries). My project so far involves reading doc and docx files. I've managed to read the contents of a doc file; however, for docx files I'm still having problems. Can anyone give guidelines on the code and libraries (jar files) needed to read the content of a docx file? I'm trying to limit the libraries that need to be added to the project, since I only need to read doc and docx. The following works for doc:

        fs = new POIFSFileSystem(new FileInputStream(fileName));
        HWPFDocument doc = new HWPFDocument(fs);
        WordExtractor we = new WordExtractor(doc);
        String[] p = we.getParagraphText();

    Read the article

  • php variable scope in oop

    - by mr o
    Hi, I wonder if anyone can help out here. I'm trying to understand how to use an object's properties across multiple non-class pages, but I can't seem to get my head around anything I have tried so far. For example, a class called Person:

        class person {
            static $name;
        }

    but I have a number of different regular pages that want to utilize $name across the board. I have been trying things like this:
    pageone.php

        include "person.php";
        $names = new Person();
        echo person::$name;
        names::$name = 'bob';

    pagetwo.php

        include "person.php";
        echo person::$name;

    I can work with classes to the extent that I'm OK as long as I am creating new instances on every page, but how can I make the properties of one object available to all, like a shared variable? Thanks

    Read the article

  • How to consume PHP SOAP service using WCF

    - by mr.b
    I am new in web services so apologize me if I am making some cardinal mistake here, hehe. I have built SOAP service using PHP. Service is SOAP 1.2 compatible, and I have WSDL available. I have enabled sessions, so that I can track login status, etc. I don't need some super security here (ie message-level security), all I need is transport security (HTTPS), since this service will be used infrequently, and performances are not so much of an issue. I am having difficulties making it to work at all. C# throws some unclear exception ("Server returned an invalid SOAP Fault. Please see InnerException for more details.", which in turn says "Unbound prefix used in qualified name 'rpc:ProcedureNotPresent'."), but consuming service using PHP SOAP client behaves as expected (including session and all). So far, I have following code. note: due to amount of real code, I am posting minimal code configuration PHP SOAP server (using Zend Soap Server library), including class(es) exposed via service: <?php class Verification_LiteralDocumentProxy { protected $instance; public function __call($methodName, $args) { if ($this->instance === null) { $this->instance = new Verification(); } $result = call_user_func_array(array($this->instance, $methodName), $args[0]); return array($methodName.'Result' => $result); } } class Verification { private $guid = ''; private $hwid = ''; /** * Initialize connection * * @param string GUID * @param string HWID * @return bool */ public function Initialize($guid, $hwid) { $this->guid = $guid; $this->hwid = $hwid; return true; } /** * Closes session * * @return void */ public function Close() { // if session is working, $this->hwid and $this->guid // should contain non-empty values } } // start up session stuff $sess = Session::instance(); require_once 'Zend/Soap/Server.php'; $server = new Zend_Soap_Server('https://www.somesite.com/api?wsdl'); $server->setClass('Verification_LiteralDocumentProxy'); $server->setPersistence(SOAP_PERSISTENCE_SESSION); $server->handle(); WSDL: <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:tns="https://www.somesite.com/api" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap-enc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" name="Verification" targetNamespace="https://www.somesite.com/api"> <types> <xsd:schema targetNamespace="https://www.somesite.com/api"> <xsd:element name="Initialize"> <xsd:complexType> <xsd:sequence> <xsd:element name="guid" type="xsd:string"/> <xsd:element name="hwid" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="InitializeResponse"> <xsd:complexType> <xsd:sequence> <xsd:element name="InitializeResult" type="xsd:boolean"/> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="Close"> <xsd:complexType/> </xsd:element> </xsd:schema> </types> <portType name="VerificationPort"> <operation name="Initialize"> <documentation> Initializes connection with server</documentation> <input message="tns:InitializeIn"/> <output message="tns:InitializeOut"/> </operation> <operation name="Close"> <documentation> Closes session between client and server</documentation> <input message="tns:CloseIn"/> </operation> </portType> <binding name="VerificationBinding" type="tns:VerificationPort"> <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/> <operation name="Initialize"> <soap:operation soapAction="https://www.somesite.com/api#Initialize"/> <input> 
<soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> <operation name="Close"> <soap:operation soapAction="https://www.somesite.com/api#Close"/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="VerificationService"> <port name="VerificationPort" binding="tns:VerificationBinding"> <soap:address location="https://www.somesite.com/api"/> </port> </service> <message name="InitializeIn"> <part name="parameters" element="tns:Initialize"/> </message> <message name="InitializeOut"> <part name="parameters" element="tns:InitializeResponse"/> </message> <message name="CloseIn"> <part name="parameters" element="tns:Close"/> </message> </definitions> And finally, WCF C# consumer code: [ServiceContract(SessionMode = SessionMode.Required)] public interface IVerification { [OperationContract(Action = "Initialize", IsInitiating = true)] bool Initialize(string guid, string hwid); [OperationContract(Action = "Close", IsInitiating = false, IsTerminating = true)] void Close(); } class Program { static void Main(string[] args) { WSHttpBinding whb = new WSHttpBinding(SecurityMode.Transport, true); ChannelFactory<IVerification> cf = new ChannelFactory<IVerification>( whb, "https://www.somesite.com/api"); IVerification client = cf.CreateChannel(); Console.WriteLine(client.Initialize("123451515", "15498518").ToString()); client.Close(); } } Any ideas? What am I doing wrong here? Thanks!

    Read the article

  • How to get BinarySecurityToken into the wcf soap request

    - by Mr Bell
    I need to sign my soap request to a 3rd party. The provided an example what the call should look like. And I am trying, rather unsuccessfully to make this call with wcf. I need to make a wcf soap call where the header contains BinarySecurityToken, Signature, and SecurityTokenReference. Here is the example they sent me (with some of the values omitted) I have a certificate for signing, but I cant for the life of me figure out how to make this work <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Header><wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:BinarySecurityToken EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3" wsu:Id="SecurityToken-..omitted.." xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">..omitted..</wsse:BinarySecurityToken> <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> <ds:SignedInfo> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/> <ds:Reference URI="#Body"> <ds:Transforms> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/> </ds:Transforms> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/> <ds:DigestValue>..omitted...</ds:DigestValue> </ds:Reference> </ds:SignedInfo> <ds:SignatureValue> ..omitted.. </ds:SignatureValue> <ds:KeyInfo><wsse:SecurityTokenReference><wsse:Reference URI="#SecurityToken-..omitted.." ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/></wsse:SecurityTokenReference></ds:KeyInfo></ds:Signature></wsse:Security></soapenv:Header><soapenv:Body wsu:Id="Body"><in0 xmlns="http://test.3rdParty.com">123</in0></soapenv:Body></soapenv:Envelope>

    Read the article

  • How do i specify wcf behaviorExtension class type without the assembly version number?

    - by Mr Bell
    I have a web app that uses a WCF service that utilizes a behaviorExtension like so:

        <behaviorExtensions>
          <add name="clientCredentialsExtension"
               type="Simon.Web.Giftcard.WCFSecurity.ClientCredentialsExtensionElement, Simon.Web.Giftcard, Version=1.0.3736.20411, Culture=neutral, PublicKeyToken=null"/>
        </behaviorExtensions>

    The problem is that this web app's version changes with every compile (I think), which invalidates this entry. How can I avoid having to change the version number every time I compile this? Can I specify the extension in code somewhere?
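    One hedged possibility, assuming the ever-changing version comes from a wildcard AssemblyVersion attribute in the project: pin the assembly version so the type string in the config stays valid between builds. The attribute values below are illustrative only:

        // AssemblyInfo.cs of the Simon.Web.Giftcard project (illustrative values).
        // A wildcard such as "1.0.*" makes the build/revision number change on every
        // compile; a fixed version keeps the behaviorExtensions type string stable.
        [assembly: System.Reflection.AssemblyVersion("1.0.0.0")]
        [assembly: System.Reflection.AssemblyFileVersion("1.0.0.0")]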

    Read the article

  • Tutorial: Simple WCF XML-RPC client

    - by mr.b
    Update: I have provided a complete code example in the answer below. I have built my own little custom XML-RPC server, and since I'd like to keep things simple on both the server and client side, what I would like to accomplish is to create the simplest possible client (in C# preferably) using WCF. Let's say that the contract for the service exposed via XML-RPC is as follows:

        [ServiceContract]
        public interface IContract
        {
            [OperationContract(Action = "Ping")]
            string Ping();                 // server returns back string "Pong"

            [OperationContract(Action = "Echo")]
            string Echo(string message);   // server echoes back whatever message is
        }

    So, there are two example methods, one without any arguments and another with a simple string argument, both returning strings (just for the sake of example). The service is exposed via HTTP. Aaand, what's next? :)

    Read the article

  • Simple iPhone drawing app with Quartz 2D

    - by Mr guy 4
    I am making a simple iPhone drawing program as a personal side project. I capture touch events in a subclassed UIView and render the actual stuff to a separate CGLayer. After each render, I call [self setNeedsLayout], and in the drawRect: method I draw the CGLayer to the screen context. This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have. The way I thought to do this was to create a CGMutablePath and simply:

        CGMutablePathRef path;

        -(void)touchBegan {
            path = CGPathCreateMutable();
        }

        -(void)touchMoved {
            CGPathMoveToPoint(path, NULL, x, y);
            CGPathAddLineToPoint(path, NULL, x, y);
        }

        -(void)drawRect:(CGContextRef)context {
            CGContextBeginPath(context);
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
        }

    However, after drawing for more than 1 second, performance degrades miserably. I would just draw each line into the off-screen CGLayer, if it were not for the variable opacity! The less-than-100% opacity causes dots to be left on the screen connecting the lines. I have looked at CGContextSetBlendMode() but alas I cannot find an answer. Can anyone point me in the right direction? Other iPhone apps are able to do this with very good efficiency.

    Read the article

  • Linq 2 SQL One to Zero or One relationship possible?

    - by Mr. Flibble
    Is it possible to create a one to zero or one relationship in Linq2SQL? My understanding is that to create a one to one relationship you create a FK relationship on the PK of each table. But you cannot make the PK nullable, so I don't see how to make a one to zero or one relationship work? I'm using the designer to automatically create the model - so I would like to know how to set up the SQL tables to induce the relationship - not some custom ORM code.

    Read the article

  • Using Repository and Unit of Work patterns with Entity Framework 4.0 and MVC 2

    - by Mr. D
    Hi, I'm following the article Using Repository and Unit of Work patterns with Entity Framework 4.0. I'm trying to implement the Repository and Unit of Work patterns using ASP.NET MVC 2 and Entity Framework 4. Please let me know if I'm doing it right...
    In the Models folder:
    Northwind.edmx
    Products.cs (POCO class)
    ProductRepository.cs (did my product query)
    IProductRepository.cs
    NorthwindContext.cs
    IUnitOfWork.cs
    In the Controller folder:
    ProductController.cs (retrieves from ProductRepository.cs and passes it to the view)
    When I run the application, I get the error message: "Mapping and metadata information could not be found for EntityType 'NorthwindMvcPoco.Models.Category'." I don't know what I'm doing wrong. I've searched the whole web and couldn't resolve this issue. Please help me.

    Read the article

  • Android edtftpj/PRo SFTP heap worker problem

    - by Mr. Kakakuwa Bird
    Hi I am using edtftpj-pro3.1 trial copy in my android app to make SFTP connection with the server. After few connections with the server with 5-6 file transfers, my app is crashing with following exception. Is it causing the problem or what could be the problem?? I tried setParallelMode(false) in SSHFTPClient, but it is not working. Exception i'm getting is, 05-31 18:28:12.661: ERROR/dalvikvm(589): HeapWorker is wedged: 10173ms spent inside Lcom/enterprisedt/net/j2ssh/sftp/SftpFileInputStream;.finalize()V 05-31 18:28:12.661: INFO/dalvikvm(589): DALVIK THREADS: 05-31 18:28:12.661: INFO/dalvikvm(589): "main" prio=5 tid=3 WAIT 05-31 18:28:12.661: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x4001b260 self=0xbd18 05-31 18:28:12.661: INFO/dalvikvm(589): | sysTid=589 nice=0 sched=0/0 cgrp=default handle=-1343993192 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.661: INFO/dalvikvm(589): - waiting on <0x122d70 (a android.os.MessageQueue) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:288) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.os.MessageQueue.next(MessageQueue.java:148) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.os.Looper.loop(Looper.java:110) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.app.ActivityThread.main(ActivityThread.java:4363) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.reflect.Method.invokeNative(Native Method) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.reflect.Method.invoke(Method.java:521) 05-31 18:28:12.661: INFO/dalvikvm(589): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 05-31 18:28:12.661: INFO/dalvikvm(589): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 05-31 18:28:12.661: INFO/dalvikvm(589): at dalvik.system.NativeStart.main(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): "Transport protocol 1" daemon prio=5 tid=29 NATIVE 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x44774768 self=0x3a7938 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=605 nice=0 sched=0/0 cgrp=default handle=3834600 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.platform.OSNetworkSystem.receiveStreamImpl(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.platform.OSNetworkSystem.receiveStream(OSNetworkSystem.java:478) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.PlainSocketImpl.read(PlainSocketImpl.java:565) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.SocketInputStream.read(SocketInputStream.java:87) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.SocketInputStream.read(SocketInputStream.java:67) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.fillbuf(BufferedInputStream.java:157) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.read(BufferedInputStream.java:346) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.read(BufferedInputStream.java:341) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.A.A((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.A.B((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.processMessages((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at 
com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.startBinaryPacketProtocol((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.run((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "StreamFrameSender" prio=5 tid=27 TIMED_WAIT 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x44750a60 self=0x3964d8 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=603 nice=0 sched=0/0 cgrp=default handle=3761648 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): - waiting on <0x399478 (a com.corventis.gateway.ppp.StreamFrameSender) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:326) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.ppp.StreamFrameSender.run(StreamFrameSender.java:154) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "SftpActiveWorker" prio=5 tid=25 TIMED_WAIT 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447522b0 self=0x398e00 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=604 nice=0 sched=0/0 cgrp=default handle=3762704 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): - waiting on <0x3962d8 (a com.corventis.gateway.hostcommunicator.SftpActiveWorker) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:326) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.SftpActiveWorker.run(SftpActiveWorker.java:151) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "Thread-12" prio=5 tid=23 NATIVE 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x4474aca8 self=0x115690 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=602 nice=0 sched=0/0 cgrp=default handle=878120 05-31 18:28:12.671: INFO/dalvikvm(589): at android.bluetooth.BluetoothSocket.acceptNative(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothSocket.accept(BluetoothSocket.java:287) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothServerSocket.accept(BluetoothServerSocket.java:105) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothServerSocket.accept(BluetoothServerSocket.java:91) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.bluetooth.BluetoothManager.openPort(BluetoothManager.java:215) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.bluetooth.BluetoothManager.open(BluetoothManager.java:84) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicator.open(PatchCommunicator.java:123) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicatorRunnable.run(PatchCommunicatorRunnable.java:134) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "HfGatewayApplication" prio=5 tid=21 RUNNABLE 05-31 18:28:12.681: 
INFO/dalvikvm(589): | group="main" sCount=0 dsCount=0 s=N obj=0x4472d9b0 self=0x120928 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=601 nice=0 sched=0/0 cgrp=default handle=1264672 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.Deflate.deflateInit2(Deflate.java:~1361) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.Deflate.deflateInit(Deflate.java:1316) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZStream.deflateInit(ZStream.java:127) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZStream.deflateInit(ZStream.java:120) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZOutputStream.(ZOutputStream.java:62) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.zipfile.ZipStorer.addStream(ZipStorer.java:211) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.zipfile.ZipStorer.createZip(ZipStorer.java:127) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicator.scanAndCompress(HostCommunicator.java:453) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicator.doWork(HostCommunicator.java:1434) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hf.HfGatewayApplication.doWork(HfGatewayApplication.java:621) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hf.HfGatewayApplication.run(HfGatewayApplication.java:546) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-10" prio=5 tid=19 TIMED_WAIT 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447287f8 self=0x1451b8 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=598 nice=0 sched=0/0 cgrp=default handle=1331920 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.VMThread.sleep(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.sleep(Thread.java:1306) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.sleep(Thread.java:1286) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.util.Watchdog.run(Watchdog.java:167) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-9" prio=5 tid=17 RUNNABLE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=Y obj=0x44722c90 self=0x114e20 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=597 nice=0 sched=0/0 cgrp=default handle=1200048 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.time.Time.currentTimeMillis(Time.java:~77) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicatorState$1.run(PatchCommunicatorState.java:27) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-8" prio=5 tid=15 RUNNABLE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=Y obj=0x44722430 self=0x124dd0 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=596 nice=0 sched=0/0 cgrp=default handle=1199848 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.time.Time.currentTimeMillis(Time.java:~80) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicatorState$1.run(HostCommunicatorState.java:35) 05-31 18:28:12.681: INFO/dalvikvm(589): "Binder Thread #2" prio=5 tid=13 NATIVE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 
dsCount=0 s=N obj=0x4471ccc0 self=0x149b60 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=595 nice=0 sched=0/0 cgrp=default handle=1317992 05-31 18:28:12.681: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): "Binder Thread #1" prio=5 tid=11 NATIVE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447159a8 self=0x123298 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=594 nice=0 sched=0/0 cgrp=default handle=1164896 05-31 18:28:12.681: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "JDWP" daemon prio=5 tid=9 VMWAIT 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x4470f2a0 self=0x141a90 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=593 nice=0 sched=0/0 cgrp=default handle=1316864 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "Signal Catcher" daemon prio=5 tid=7 VMWAIT 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x4470f1e8 self=0x124970 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=592 nice=0 sched=0/0 cgrp=default handle=1316800 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "HeapWorker" daemon prio=5 tid=5 MONITOR 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x431b4550 self=0x141670 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=591 nice=0 sched=0/0 cgrp=default handle=1316400 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpSubsystemClient.closeHandle((null):~-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpSubsystemClient.closeFile((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFile.close((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFileInputStream.close((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFileInputStream.finalize((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: ERROR/dalvikvm(589): VM aborting 05-31 18:28:12.801: INFO/DEBUG(49): * ** * ** * ** * ** * ** * 05-31 18:28:12.801: INFO/DEBUG(49): Build fingerprint: 'google/passion/passion/mahimahi:2.1-update1/ERE27/24178:user/release-keys' 05-31 18:28:12.801: INFO/DEBUG(49): pid: 589, tid: 601 com.corventis.gateway.hf <<< 05-31 18:28:12.801: INFO/DEBUG(49): signal 11 (SIGSEGV), fault addr deadd00d 05-31 18:28:12.801: INFO/DEBUG(49): r0 00000026 r1 afe13329 r2 afe13329 r3 00000000 05-31 18:28:12.801: INFO/DEBUG(49): r4 ad081f50 r5 400091e8 r6 009b3a6a r7 00000000 05-31 18:28:12.801: INFO/DEBUG(49): r8 000002e8 r9 ad082ba0 10 ad082ba0 fp 00000000 05-31 18:28:12.801: INFO/DEBUG(49): ip deadd00d sp 46937c58 lr afe14373 pc ad035b4c cpsr 20000030 05-31 18:28:12.851: INFO/DEBUG(49): #00 pc 00035b4c /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #01 pc 00044d7c /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #02 pc 000162e4 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #03 pc 00016b60 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #04 pc 00016ce0 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #05 pc 00057b64 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #06 pc 00057cc0 
/system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #07 pc 00057dd4 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #08 pc 00012ffc /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #09 pc 00019338 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #10 pc 00018804 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #11 pc 0004eed0 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #12 pc 0004eef8 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #13 pc 000426d4 /system/lib/libdvm.so 05-31 18:28:12.881: INFO/DEBUG(49): #14 pc 0000fd74 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): #15 pc 0000f840 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): code around pc: 05-31 18:28:12.881: INFO/DEBUG(49): ad035b3c 58234808 b1036b9b f8df4798 2026c01c 05-31 18:28:12.881: INFO/DEBUG(49): ad035b4c 0000f88c ef52f7d8 0004c428 fffe631c 05-31 18:28:12.881: INFO/DEBUG(49): ad035b5c fffe94f4 000002f8 deadd00d f8dfb40e 05-31 18:28:12.881: INFO/DEBUG(49): code around lr: 05-31 18:28:12.881: INFO/DEBUG(49): afe14360 686768a5 f9b5e008 b120000c 46289201 05-31 18:28:12.881: INFO/DEBUG(49): afe14370 9a014790 35544306 37fff117 6824d5f3 05-31 18:28:12.881: INFO/DEBUG(49): afe14380 d1ed2c00 bdfe4630 00026ab0 000000b4 05-31 18:28:12.881: INFO/DEBUG(49): stack: 05-31 18:28:12.881: INFO/DEBUG(49): 46937c18 00000015 05-31 18:28:12.881: INFO/DEBUG(49): 46937c1c afe13359 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): 46937c20 afe3b02c /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c24 afe3afd8 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c28 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c2c afe14373 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c30 afe13329 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c34 afe13329 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c38 afe13380 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c3c ad081f50 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c40 400091e8 /dev/ashmem/mspace/dalvik-heap/zygote/0 (deleted) 05-31 18:28:12.891: INFO/DEBUG(49): 46937c44 009b3a6a 05-31 18:28:12.891: INFO/DEBUG(49): 46937c48 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c4c afe1338d /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c50 df002777 05-31 18:28:12.891: INFO/DEBUG(49): 46937c54 e3a070ad 05-31 18:28:12.891: INFO/DEBUG(49): #00 46937c58 ad06f573 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c5c ad044d81 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): #01 46937c60 000027bd 05-31 18:28:12.891: INFO/DEBUG(49): 46937c64 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c68 463b6ab4 /data/dalvik-cache/data@[email protected]@classes.dex 05-31 18:28:12.891: INFO/DEBUG(49): 46937c6c 463d1ecf /data/dalvik-cache/data@[email protected]@classes.dex 05-31 18:28:12.891: INFO/DEBUG(49): 46937c70 00140450 [heap] 05-31 18:28:12.891: INFO/DEBUG(49): 46937c74 ad041d2b /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c78 ad082f2c /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c7c ad06826c /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c80 00140450 [heap] 05-31 18:28:12.891: INFO/DEBUG(49): 46937c84 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c88 000002f8 05-31 18:28:12.891: INFO/DEBUG(49): 46937c8c 400091e8 /dev/ashmem/mspace/dalvik-heap/zygote/0 (deleted) 05-31 18:28:12.891: INFO/DEBUG(49): 46937c90 ad081f50 /system/lib/libdvm.so 05-31 18:28:12.891: 
INFO/DEBUG(49): 46937c94 000002f8 05-31 18:28:12.891: INFO/DEBUG(49): 46937c98 00002710 05-31 18:28:12.891: INFO/DEBUG(49): 46937c9c ad0162e8 /system/lib/libdvm.so Thanks & Regards,

    Read the article

  • Encrypting using RSA via COM Interop = "The requested operation requires delegation to be enabled on the machine"

    - by Mr AH
    Hi guys, so I've got this little static method in a .NET class which takes a string, uses some stored public key and returns the encrypted version of that string. This is basically so some user-entered data can be saved encrypted, then retrieved and decrypted at a later date. Pretty basic stuff, and the unit test works fine. However, part of the application is in classic ASP. This then uses a COM-visible version of the class to go off and invoke the method on the real class and return the same string to the COM client (classic ASP). I use this kind of stuff all the time, but in this case we have a major problem. As the method is doing something with RSA keys and has to access certain machine information to do so, we get the error: "The requested operation requires delegation to be enabled on the machine." I've searched around a lot, but can't really understand what this means. I assume I am getting this error from COM but not the unit test because the UT runs as me (Administrator) and classic ASP runs as IWAM. Does anyone know what I need to do to enable IWAM to do this? Or indeed whether this is the real problem here?
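    One frequently cited avenue for this kind of CryptographicException under a service account (an assumption here, not a confirmed diagnosis of this exact setup) is that RSACryptoServiceProvider defaults to a per-user key container, which is unavailable when the IWAM account has no loaded profile; pointing it at the machine-level key store avoids that dependency. A minimal hedged sketch, with the public-key import step standing in for however the stored key is actually loaded:

        using System.Security.Cryptography;
        using System.Text;

        public static class RsaHelper
        {
            public static byte[] Encrypt(string plainText, RSAParameters publicKey)
            {
                // Use the machine key store so no user profile is required.
                var cspParams = new CspParameters { Flags = CspProviderFlags.UseMachineKeyStore };
                using (var rsa = new RSACryptoServiceProvider(cspParams))
                {
                    rsa.ImportParameters(publicKey);   // assumed: key comes in as RSAParameters
                    return rsa.Encrypt(Encoding.UTF8.GetBytes(plainText), false);
                }
            }
        }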

    Read the article

  • nHibernate, Automapping and Chained Abstract Classes

    - by Mr Snuffle
    I'm having some trouble using NHibernate, automapping and a class structure with multiple chains of abstract classes. It's something akin to this:

        public abstract class AbstractClassA {}
        public abstract class AbstractClassB : AbstractClassA {}
        public class ClassA : AbstractClassB {}

    When I attempt to build these mappings, I receive the following error:

        FluentNHibernate.Cfg.FluentConfigurationException was unhandled
        Message: An invalid or incomplete configuration was used while creating a SessionFactory.
        Check PotentialReasons collection, and InnerException for more detail.
        Database was not configured through Database method.

    However, if I remove the abstract keyword from AbstractClassB, everything works fine. The problem only occurs when I have more than one abstract class in the class hierarchy. I've manually configured the automapping to include both AbstractClassA and AbstractClassB using the following binding class:

        public class BindItemBases : IManualBinding
        {
            public void Bind(FluentNHibernate.Automapping.AutoPersistenceModel model)
            {
                model.IncludeBase<AbstractClassA>();
                model.IncludeBase<AbstractClassB>();
            }
        }

    I've had to do a bit of hackery to get around this, but there must be a better way to get this working. Surely NHibernate supports something like this; I just haven't figured out how to configure it right. Cheers, James
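    Since the inner exception complains that the database was not configured through the Database method, here is a hedged sketch of how an automapping like the one above is typically wired into a Fluent NHibernate configuration that does make that call; the connection string, dialect and class names are placeholders taken from the question, not a confirmed fix for the abstract-chain issue itself:

        using FluentNHibernate.Automapping;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;

        public static class SessionFactoryBuilder
        {
            public static NHibernate.ISessionFactory Build(string connectionString)
            {
                // Same IncludeBase calls as in BindItemBases above.
                var model = AutoMap.AssemblyOf<ClassA>()
                    .IncludeBase<AbstractClassA>()
                    .IncludeBase<AbstractClassB>();

                return Fluently.Configure()
                    // The piece the exception points at: an explicit Database(...) call.
                    .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
                    .Mappings(m => m.AutoMappings.Add(model))
                    .BuildSessionFactory();
            }
        }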

    Read the article

  • append set to another set

    - by mr.bio
    Hi, is there a better way of appending a set to another set than iterating through each element? I have:

        set<string> foo;
        set<string> bar;
        .....
        for (set<string>::const_iterator p = foo.begin( ); p != foo.end( ); ++p)
            bar.insert(*p);

    Is there a more efficient way to do this?

    Read the article
