Search Results

Search found 9419 results on 377 pages for 'context sensitive grammar'.


  • Error in connecting Eclipse to SQL Server

    - by user3721900
    This is the syntax error:

        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files (x86)\Java\jre7\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:/Program Files (x86)/Java/jre7/bin/client;C:/Program Files (x86)/Java/jre7/bin;C:/Program Files (x86)/Java/jre7/lib/i386;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microchip\MPLAB C32 Suite\bin;C:\Program Files\Java\jdk1.7.0_25\bin;C:\Program Files (x86)\Java\jdk1.7.0_03\bin;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\Google\google_appengine\;C:\Users\Patrick\Desktop\2013-2014 2nd Sem Files\Eclipsee\eclipse;;.
        Jun 10, 2014 5:15:51 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
        WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:B2B' did not find a matching property.
        Jun 10, 2014 5:15:51 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["http-bio-8080"]
        Jun 10, 2014 5:15:51 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
        Jun 10, 2014 5:15:51 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 544 ms
        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.StandardService startInternal
        INFO: Starting service Catalina
        Jun 10, 2014 5:15:51 PM org.apache.catalina.core.StandardEngine startInternal
        INFO: Starting Servlet Engine: Apache Tomcat/7.0.42
        Jun 10, 2014 5:15:52 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["http-bio-8080"]
        Jun 10, 2014 5:15:52 PM org.apache.coyote.AbstractProtocol start
        INFO: Starting ProtocolHandler ["ajp-bio-8009"]
        Jun 10, 2014 5:15:52 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 374 ms
        com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near '`'.
    This is my code:

        package b2b.fishermall;

        public class ConnectionString extends SqlStringCommands {
            public String getDriver() {
                return "com.microsoft.sqlserver.jdbc.SQLServerDriver";
            }
            public String getURL() {
                return "jdbc:sqlserver://localhost:1433;databaseName=B2B;integratedSecurity=true;";
            }
            public String getUsername() {
                return "";
            }
            public String getDbPassword() {
                return "";
            }
        }
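    For what it's worth, the connection itself is fine by the time this exception appears: it comes back from SQL Server, and the backtick points at a query string built elsewhere (presumably in SqlStringCommands, which isn't shown). T-SQL does not use MySQL-style backtick quoting. A minimal sketch of the distinction, using a hypothetical Products table:

        import java.sql.*;

        public class QuoteDemo {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:sqlserver://localhost:1433;databaseName=B2B;integratedSecurity=true;";
                // T-SQL quotes identifiers with [brackets] (or leaves them bare), never `backticks`:
                String bad  = "SELECT * FROM `Products`";              // MySQL style -> "Incorrect syntax near '`'"
                String good = "SELECT * FROM [Products] WHERE [Id] = ?";
                try (Connection con = DriverManager.getConnection(url);
                     PreparedStatement ps = con.prepareStatement(good)) {
                    ps.setInt(1, 1);                                   // parameters also avoid most quoting issues
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) { /* read columns */ }
                    }
                }
            }
        }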

    Read the article

  • display image and run script for fifteen seconds

    - by Ryan Max
    This is very similar to a question I asked the other day, but my page code has become significantly more complicated and I need to revisit it. I've been using the following code to replace an image for a set period of time (15 seconds) when the link is clicked, then revert to the original image after the 15 seconds:

        $('#myLink').click(function() {
            $('#myImg').attr('src', 'newImg.jpg');
            setTimeout(function() {
                $('#myImg').attr('src', 'oldImg.jpg');
            }, 15000);
        });

    Now I'd also like to run a snippet of JavaScript when the link is clicked (in addition to replacing the image), only while the 15-second image is shown, and then have that code stop as well after the 15 seconds. However, I'm not sure how to have jQuery "echo" this code onto the page just for that window. I'm new to jQuery, so I don't know how to make this script work in a jQuery context. It's a script for a webcam; it takes a new picture every half second and replaces it on the page:

        interval = 500;
        imgsrc = "webcam/image.jpg";
        function Refresh() {
            tmp = new Date();
            tmp = "?" + tmp.getTime();
            document.images["image1"].src = imgsrc + tmp;
            setTimeout("Refresh()", interval);
        }
        Refresh();

    This is what I need to happen, step by step:

    1) The user comes to the page and sees a static image that says "Click here to view webcam".
    2) The user clicks the image.
    3) The image is replaced by the live webcam image, which is refreshed every 0.5 seconds by the second script above.
    4) After 15 seconds, the live webcam reverts back to the static image saying "click here to view webcam".

    It is ONLY during that 15-second interval that I want the webcam refresh script running; otherwise it's wasting bandwidth on an element that isn't even shown.
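    One way to wire this up (a sketch; the element id and image URLs are placeholders for whatever the page actually uses) is to keep a handle to an interval timer, start it on click, and clear it from the same 15-second timeout that restores the static image:

        var refreshTimer = null;

        $('#myImg').click(function () {
            if (refreshTimer) return;                     // already showing the live feed
            var img = this;
            refreshTimer = setInterval(function () {
                // cache-bust so the browser refetches the frame every 0.5s
                img.src = 'webcam/image.jpg?' + new Date().getTime();
            }, 500);
            setTimeout(function () {
                clearInterval(refreshTimer);              // stop polling: no bandwidth used
                refreshTimer = null;
                img.src = 'clickToView.jpg';              // restore the static image
            }, 15000);
        });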

    Read the article

  • Is typeid of a type name always evaluated at compile time in C++?

    - by cyril42e
    I wanted to check that typeid is evaluated at compile time when used with a type name (i.e. typeid(int), typeid(std::string), ...). To do so, I repeated the comparison of two typeid calls in a loop and compiled with optimizations enabled, in order to see whether the compiler simplifies the loop away (by looking at the execution time, which is 1 us when it simplifies versus 160 ms when it does not). And I get strange results, because sometimes the compiler simplifies the code and sometimes it does not. I use g++ (I tried different 4.x versions), and here is the program:

        #include <iostream>
        #include <typeinfo>
        #include <time.h>

        class DisplayData {};
        class RobotDisplay: public DisplayData {};
        class SensorDisplay: public DisplayData {};
        class RobotQt {};
        class SensorQt {};

        timespec tp1, tp2;
        const int n = 1000000000;

        int main() {
            int avg = 0;
            clock_gettime(CLOCK_REALTIME, &tp1);
            for(int i = 0; i < n; ++i) {
        //      if (typeid(RobotQt) == typeid(RobotDisplay))      // (1) compile time
        //      if (typeid(SensorQt) == typeid(SensorDisplay))    // (2) compile time
                if (typeid(RobotQt) == typeid(RobotDisplay) ||
                    typeid(SensorQt) == typeid(SensorDisplay))    // (3) not compile time ???!!!
                    avg++;
                else
                    avg--;
            }
            clock_gettime(CLOCK_REALTIME, &tp2);
            std::cout << "time (" << avg << "): "
                      << (tp2.tv_sec-tp1.tv_sec)*1000000000+(tp2.tv_nsec-tp1.tv_nsec)
                      << " ns" << std::endl;
        }

    The conditions in which this problem appears are not clear, but:

    - if there is no inheritance involved, there is no problem (always compile time)
    - if I do only one comparison, there is no problem
    - the problem only appears with a disjunction of comparisons where all the terms are false

    So is there something I didn't get about how typeid works (is it always supposed to be evaluated at compile time when used with type names?), or could this be a gcc bug in evaluation or optimization? For context, I tracked the problem down to this very simplified example, but my goal is to use typeid with template types (as partial function template specialization is not possible). Thanks for your help!
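    As far as the standard goes, typeid(T) == typeid(U) on two type names is something the compiler may fold to a constant, but nothing requires it to; whether g++ does is purely an optimizer decision, which matches the on-and-off behavior above. If a guaranteed compile-time answer is the actual goal, a type trait gives it by construction. A sketch (std::is_same is C++11, i.e. -std=c++0x on g++ 4.x; tr1 has an equivalent):

        #include <type_traits>

        // Resolved entirely at template-instantiation time; no RTTI involved,
        // so the optimizer sees a literal true/false, not typeid calls to fold.
        template <typename T, typename U>
        inline bool same_type() { return std::is_same<T, U>::value; }

        // e.g. in the loop above:
        //   if (same_type<RobotQt, RobotDisplay>() || same_type<SensorQt, SensorDisplay>())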

    Read the article

  • MVC 3 Nested EditorFor Templates

    - by Gordon Hickley
    I am working with MVC 3, Razor views and EditorFor templates. I have three simple nested models:

        public class BillingMatrixViewModel
        {
            public ICollection<BillingRateRowViewModel> BillingRateRows { get; set; }

            public BillingMatrixViewModel()
            {
                BillingRateRows = new Collection<BillingRateRowViewModel>();
            }
        }

        public class BillingRateRowViewModel
        {
            public ICollection<BillingRate> BillingRates { get; set; }

            public BillingRateRowViewModel()
            {
                BillingRates = new Collection<BillingRate>();
            }
        }

        public class BillingRate
        {
            public int Id { get; set; }
            public int Rate { get; set; }
        }

    The BillingMatrixViewModel has a view:

        @using System.Collections
        @using WIP_Data_Migration.Models.ViewModels
        @model WIP_Data_Migration.Models.ViewModels.BillingMatrixViewModel

        <table class="matrix" id="matrix">
            <tbody>
                <tr>
                    @Html.EditorFor(model => Model.BillingRateRows, "BillingRateRow")
                </tr>
            </tbody>
        </table>

    The BillingRateRow has an editor template called BillingRateRow:

        @using System.Collections
        @model IEnumerable<WIP_Data_Migration.Models.ViewModels.BillingRateRowViewModel>

        @foreach (var item in Model)
        {
            <tr>
                <td>@item.BillingRates.First().LabourClass.Name</td>
                @Html.EditorFor(m => item.BillingRates)
            </tr>
        }

    The BillingRate has an editor template:

        @model WIP_Data_Migration.Models.BillingRate
        <td>
            @Html.TextBoxFor(model => model.Rate, new {style = "width: 20px"})
        </td>

    The markup produced for each input is:

        <input name="BillingMatrix.BillingRateRows.item.BillingRates[0].Rate"
               id="BillingMatrix_BillingRateRows_item_BillingRates_0__Rate"
               style="width: 20px;" type="text" value="0"/>

    Notice the name and ID attributes: the BillingRate indexes are handled nicely, but BillingRateRows has no index, just '.item.'. From my research this is because the foreach loop pulls the expression out of its context; the loop shouldn't be necessary. I want to achieve:

        <input name="BillingMatrix.BillingRateRows[0].BillingRates[0].Rate"
               id="BillingMatrix_BillingRateRows_0_BillingRates_0__Rate"
               style="width: 20px;" type="text" value="0"/>

    If I change the BillingRateRow view to:

        @model WIP_Data_Migration.Models.ViewModels.BillingRateRowViewModel
        <tr>
            @Html.EditorFor(m => Model.BillingRates)
        </tr>

    it throws an InvalidOperationException: the model item passed into the dictionary is of type System.Collections.ObjectModel.Collection[BillingRateRowViewModel], but this dictionary requires a type of BillingRateRowViewModel. Can anyone shed any light on this?
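    For reference, the usual way out of this (a sketch relying on MVC's own collection handling, nothing project-specific) is to type the editor template as a single row and call EditorFor on the collection with no loop and no template name; the framework then iterates the collection itself and emits the [0], [1], ... indices, provided the template file is named after the element type (e.g. Views/Shared/EditorTemplates/BillingRateRowViewModel.cshtml):

        @* Parent view -- pass the whole collection, no template name: *@
        @Html.EditorFor(model => model.BillingRateRows)

        @* BillingRateRowViewModel.cshtml -- typed to ONE row, not IEnumerable: *@
        @model WIP_Data_Migration.Models.ViewModels.BillingRateRowViewModel
        <tr>
            <td>@Model.BillingRates.First().LabourClass.Name</td>
            @Html.EditorFor(m => m.BillingRates)
        </tr>

    The exception above comes from handing a Collection to a template whose @model is the single element type; letting EditorFor do the iteration resolves both the exception and the missing indices.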

    Read the article

  • Is this a problem typically solved with IOC?

    - by Dirk
    My current application allows users to define custom web forms through a set of admin screens. It's essentially an EAV-type application. As such, I can't hard-code HTML or ASP.NET markup to render a given page. Instead, the UI requests an instance of a Form object from the service layer, which in turn constructs one using several RDBMS tables. Form contains the kind of classes you would expect to see in such a context:

        Form = IEnumerable<FormSections> = IEnumerable<FormFields>

    Here's what the service layer looks like:

        public class MyFormService : IFormService
        {
            public Form OpenForm(int formId)
            {
                // construct and return a concrete implementation of Form
            }
        }

    Everything works splendidly (for a while). The UI is none the wiser about what sections/fields exist in a given form: it happily renders the Form object it receives into a functional ASP.NET page. A few weeks later, I get a new requirement from the business: when viewing a non-editable (i.e. read-only) version of a form, certain field values should be merged together and other contrived/calculated fields should be added. No problem, I say. Simply amend my service class so that its methods are more explicit:

        public class MyFormService : IFormService
        {
            public Form OpenFormForEditing(int formId)
            {
                // construct and return a concrete implementation of Form
            }

            public Form OpenFormForViewing(int formId)
            {
                // construct a concrete implementation of Form
                // apply additional transformations to the form
            }
        }

    Again everything works great and balance has been restored to the force. The UI continues to be agnostic as to what is in the Form, and our separation of concerns is achieved. Only a few short weeks later, however, the business puts out a new requirement: in certain scenarios, we should apply only some of the form transformations I referenced above. At this point, it feels like the "explicit method" approach has reached a dead end, unless I want to end up with an explosion of methods (OpenFormViewingScenario1, OpenFormViewingScenario2, etc). Instead, I introduce another level of indirection:

        public interface IFormViewCreator
        {
            void CreateView(Form form);
        }

        public class MyFormService : IFormService
        {
            public Form OpenFormForEditing(int formId)
            {
                // construct and return a concrete implementation of Form
            }

            public Form OpenFormForViewing(int formId, IFormViewCreator formViewCreator)
            {
                // construct a concrete implementation of Form
                // apply transformations to the dynamic field list
                return formViewCreator.CreateView(form);
            }
        }

    On the surface this seems like an acceptable approach, and yet there is a certain smell: the UI, which had been living in ignorant bliss about the implementation details of OpenFormForViewing, must now possess knowledge of and create an instance of IFormViewCreator. My questions are twofold:

    1. Is there a better way to achieve the composability I'm after (perhaps by using an IoC container or a home-rolled factory to create the concrete IFormViewCreator)?
    2. Did I fundamentally screw up the abstraction here?
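    On question 1, one lightweight possibility (a sketch only; the scenario enum, registry class, and BuildForm helper are all invented names here, and a keyed registration in any IoC container would do the same job) is to let callers name a scenario rather than construct an IFormViewCreator themselves:

        using System;
        using System.Collections.Generic;

        public enum ViewScenario { Default, MergedFields, Calculated }

        // Trivial home-rolled registry; an IoC container's keyed/named
        // registrations replace this one-for-one.
        public class FormViewCreatorRegistry
        {
            private readonly Dictionary<ViewScenario, Func<IFormViewCreator>> creators =
                new Dictionary<ViewScenario, Func<IFormViewCreator>>();

            public void Register(ViewScenario key, Func<IFormViewCreator> factory)
            {
                creators[key] = factory;
            }

            public IFormViewCreator Resolve(ViewScenario key)
            {
                return creators[key]();
            }
        }

        // The UI now names a scenario; the service resolves the creator:
        public Form OpenFormForViewing(int formId, ViewScenario scenario)
        {
            var form = BuildForm(formId);              // hypothetical construction helper
            registry.Resolve(scenario).CreateView(form);
            return form;
        }

    This keeps the UI ignorant of IFormViewCreator entirely; it only knows the vocabulary of scenarios, which it arguably has to know anyway.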

    Read the article

  • Special parameters for texture binding?

    - by user146780
    Do I have to set up my GL context in a certain way to bind textures? I'm following a tutorial. I start by doing:

        #define checkImageWidth 64
        #define checkImageHeight 64
        static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
        static GLuint texName;

        void makeCheckImage(void)
        {
            int i, j, c;
            for (i = 0; i < checkImageHeight; i++) {
                for (j = 0; j < checkImageWidth; j++) {
                    c = ((((i&0x8)==0)^((j&0x8))==0))*255;
                    checkImage[i][j][0] = (GLubyte) c;
                    checkImage[i][j][1] = (GLubyte) c;
                    checkImage[i][j][2] = (GLubyte) c;
                    checkImage[i][j][3] = (GLubyte) 255;
                }
            }
        }

        void initt(void)
        {
            glClearColor (0.0, 0.0, 0.0, 0.0);
            makeCheckImage();
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glGenTextures(1, &texName);
            glBindTexture(GL_TEXTURE_2D, texName);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                         0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);
            engineGL.current.tex = texName;
        }

    In my rendering I do:

        PolygonTesselator.Begin_Contour();
        glEnable(GL_TEXTURE_2D);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
        glBindTexture(GL_TEXTURE_2D, current.tex);
        if(layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size() > 0)
        {
            glColor4f( layer[currentlayer].Shapes[i].Color.r,
                       layer[currentlayer].Shapes[i].Color.g,
                       layer[currentlayer].Shapes[i].Color.b,
                       layer[currentlayer].Shapes[i].Color.a);
        }
        for(unsigned int j = 0; j < layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size(); ++j)
        {
            gluTessVertex(PolygonTesselator.tobj,
                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0],
                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0]);
        }
        PolygonTesselator.End_Contour();
        glDisable(GL_TEXTURE_2D);

    However it still renders the color and not the texture at all. I'd at least expect to see black or something, but it's as if the bind fails. Am I missing something? Thanks
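    One thing conspicuously absent from the snippets above is texture coordinates: gluTessVertex only feeds positions to the tesselator, and in the fixed-function pipeline every vertex then samples at the current (default) coordinate, so the polygon shows a single flat texel. A sketch of a GLU_TESS_VERTEX callback that supplies coordinates (the planar divide-by-64 mapping is an assumption for the 64x64 checkerboard; any mapping that varies across the polygon would do):

        // Registered with: gluTessCallback(tobj, GLU_TESS_VERTEX, (void (*)()) tessVertex);
        void tessVertex(void *data)
        {
            const GLdouble *p = static_cast<const GLdouble *>(data);
            glTexCoord2d(p[0] / 64.0, p[1] / 64.0);  // assumed planar mapping onto the texture
            glVertex2d(p[0], p[1]);
        }

    Note also that GL_DECAL replaces the fragment color with the texture for an RGBA texture; if the glColor4f tint should show through, GL_MODULATE is the mode that mixes the two. And make sure initt runs only after the GL context exists, or the glGenTextures/glTexImage2D calls silently do nothing.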

    Read the article

  • Can a process have two pids?

    - by limp_chimp
    I'm studying computer systems and I've made this very simple function which uses fork() to create a child process. fork() returns a pid_t that is 0 if it's a child process. But calling the getpid() function within this child process returns a different, nonzero pid. In the code I have below, is newPid only meaningful in the context of the program, and not to the operating system? Is it possibly only a relative value, measured against the pid of the parent?

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <string.h>
        #include <errno.h>
        #include <stdlib.h>

        void unixError(char* msg)
        {
            printf("%s: %s\n", msg, strerror(errno));
            exit(0);
        }

        pid_t Fork()
        {
            pid_t pid;
            if ((pid = fork()) < 0)
                unixError("Fork error");
            return pid;
        }

        int main(int argc, const char * argv[])
        {
            pid_t thisPid, parentPid, newPid;
            int count = 0;

            thisPid = getpid();
            parentPid = getppid();
            printf("thisPid = %d, parent pid = %d\n", thisPid, parentPid);

            if ((newPid = Fork()) == 0) {
                count++;
                printf("I am teh child. My pid is %d, my other pid is %d\n", getpid(), newPid);
                exit(0);
            }
            printf("I am the parent. My pid is %d\n", thisPid);
            return 0;
        }

    Output:

        thisPid = 30050, parent pid = 30049
        I am the parent. My pid is 30050
        I am teh child. My pid is 30052, my other pid is 0

    Lastly, why is the child's pid 2 higher than the parent's, and not 1? The difference between the main function's pid and its parent is 1, but when we create a child it increments the pid by 2. Why is that?
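    To the title question: a process has exactly one pid. The 0 that fork() returns in the child is not a pid at all, just a flag meaning "you are the child"; the child's real pid comes from getpid(), and the parent receives that same number as fork()'s return value. As for the gap of 2, pids are handed out from a system-wide counter shared by everything the machine spawns, so some other process or thread created in between can consume an id; consecutive numbering is never guaranteed. A minimal sketch showing the two views agree:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* The parent's copy of fork()'s return value IS the child's pid. */
        int main(void)
        {
            pid_t child = fork();
            if (child == 0) {
                printf("child:  getpid() = %d\n", (int)getpid());
                _exit(0);
            }
            printf("parent: fork() returned %d\n", (int)child);  /* same number as above */
            waitpid(child, NULL, 0);
            return 0;
        }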

    Read the article

  • Object relationships

    - by Hammerstein
    This stems from a recent couple of posts I've made on events and memory management in general. I'm making a new question as I don't think the software I'm using has anything to do with the overall problem, and I'm trying to understand a little more about how to properly manage things. This is ASP.NET.

    I've been trying to understand the need for Dispose/Finalize over the past few days and believe I've reached a stage where I'm pretty happy with when I should and shouldn't implement them. "If I have members that implement IDisposable, put explicit calls to their Dispose in my Dispose method" seems to be my understanding. So now I'm thinking maybe my understanding of object lifetimes, and of what holds on to what, is just wrong! Rather than come up with some sample code that I think will illustrate my point, I'm going to describe the actual code as best I can and see if someone can talk me through it.

    I have a repository class; in it I have a DataContext that I create when the repository is created. I implement IDisposable, and when my calling object is done, I call Dispose on my repository, which in turn explicitly calls DataContext.Dispose(). Now, one of the methods of this class creates and returns a list of objects that's handed back to my front end:

        Front End -> Controller -> Repository -> Controller -> Front End

    (Using Redgate Memory Profiler, I take a snapshot of my software when the page is first loaded.) My front end creates a controller object on page load and then makes a request to the repository, getting back a list of items. When the page is finished loading, I call Dispose on the controller, which in turn calls Dispose on the context. In my mind, that should mean that my connection is closed and that I have no instances of my controller class.

    If I then refresh the page, it jumps to two 'live' instances of the controller class. If I look at the object retention graph, the objects I created in the call to the list are being held onto, ultimately by what looks like LINQ. The controller/repository aside, if I create a list of objects somewhere, or I create an object and return it somewhere, am I safe to just assume that .NET will eventually come and clean things up for me, or is there a best practice? The 'live' instances suggest to me that these are still in-memory, active objects; the fact that RMP apparently forces GC doesn't mean anything?

    Read the article

  • Android - dialer icon gets placed in recently used apps after finish()

    - by Donal Rafferty
    In my application I detect the outgoing call when a call is dialled from the dialer or contacts. This works fine; I then pop up a dialog saying I have detected the call, and the user presses a button to close the dialog, which calls finish() on that activity. It all works fine, except that when I then hold the home key to bring up the recently used apps, the dialer icon is there. And when it is clicked, the dialog is brought back into focus in the foreground, when the dialog activity should be dead and gone and not be able to be brought back to the foreground. So two questions arise: why would the dialer icon be getting placed there, and why would it be recalling my activity to the foreground?

    Here is the code for that activity, which has a dialog theme:

        public class CallDialogActivity extends Activity {
            boolean isRecording;
            AudioManager audio_service;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.dialog);
                audio_service = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
                getWindow().addFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND);

                Bundle b = this.getIntent().getExtras();
                String number = b.getString("com.networks.NUMBER");
                String name = b.getString("com.networks.NAME");
                TextView tv = (TextView) findViewById(R.id.voip);
                tv.setText(name);

                Intent service = new Intent(CallAudio.CICERO_CALL_SERVICE);
                startService(service);

                final Button stop_Call_Button = (Button) findViewById(R.id.widget35);
                this.setVolumeControlStream(AudioManager.STREAM_VOICE_CALL);
                stop_Call_Button.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        Intent service = new Intent(CallAudio._CALL_SERVICE);
                        // this is for Android 1.5 (sets speaker going for a few
                        // seconds before shutting down)
                        stopService(service);
                        Intent setIntent = new Intent(Intent.ACTION_MAIN);
                        setIntent.addCategory(Intent.CATEGORY_HOME);
                        setIntent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                        startActivity(setIntent);
                        finish();
                        isRecording = false;
                    }
                });

                final Button speaker_Button = (Button) findViewById(R.id.widget36);
                speaker_Button.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        if (true) {
                            audio_service.setSpeakerphoneOn(false);
                        } else {
                            audio_service.setSpeakerphoneOn(true);
                        }
                    }
                });
            }

            @Override
            protected void onResume() {
                super.onResume();
            }

            @Override
            protected void onPause() {
                super.onPause();
            }

            public void onCofigurationChanged(Configuration newConfig) {
                super.onConfigurationChanged(newConfig);
            }
        }

    It calls a service that uses AudioRecord to record from the mic and AudioTrack to play it out the earpiece; nothing in the service has anything to do with the dialler. Has anyone any idea why this might be happening?
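    For what it's worth, recents shows tasks rather than individual activities: launched in response to the dialer, the dialog ends up associated with the dialer's task, so recents shows the dialer icon and resurfaces that task with the dialog still on top of its back stack. A sketch of the manifest attributes that usually prevent this (these are standard Android activity attributes; adjust the name/theme to the real ones):

        <!-- Keep the dialog activity out of recents and out of task history -->
        <activity
            android:name=".CallDialogActivity"
            android:theme="@android:style/Theme.Dialog"
            android:excludeFromRecents="true"
            android:noHistory="true"
            android:taskAffinity="" />

    android:noHistory finishes the activity as soon as the user leaves it, so it can never be brought back, and excludeFromRecents stops the hosting task from appearing in the recent apps list at all.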

    Read the article

  • PHP Shared Sessions across Domain

    - by bigstylee
    Hi, I have seen a few answers to this on SO, but most of them are concerned with the use of subdomains, and none have worked for me. The common one is the use of session.cookie_domain, which from my understanding will only work with subdomains. I am interested in a solution that deals with entirely different domains (and includes the possibility of subdomains). Unfortunately, project deadlines being what they are, time is not on my side, so I turn to SO's expertise and experience.

    The current project brief is to be able to log into one site, which currently only stores the user_id in the session, and then be able to retrieve this value while on a different domain within the same server environment. Session data is being stored/retrieved from a database, where the session id is the primary key. I am hoping to find a lightweight and easy-to-implement solution. The system is utilising an in-house Model View Controller design pattern, so all requests (including different domains) are run through a single bootstrap script. Using the domain name as a variable, this determines what context to display to the user.

    One option that did look like it had potential is the use of a hidden image and using the alt tag to set the user id. My first impression is that this seems "too easy" (if possible) and riddled with security flaws. Discuss?

    Another option I considered is using the IP and User Agent for authentication, but again I feel this is not going to be a reliable option due to shared networks and changing IP addresses.

    My third option (and preferred), which I have not yet seen discussed, is using htaccess to fool the user into thinking that they are on a different domain when in fact Apache is redirecting; something like www.foo.com/index.php?domain=bar.com&controller=news/categories/1, which displays to the user as www.bar.com/news/categories/1. foo.com represents the "main site domain" which all requests are run through, and bar.com is what the user thinks they are accessing. The controller request dictates the page and view being requested. Is this possible? Are there other options? Pros/cons? Thanks in advance!
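    Since the sessions already live in a shared database, one common pattern (a sketch only; the $db helper and sso_tokens table are invented, and on the PHP of that era openssl_random_pseudo_bytes or /dev/urandom would stand in for random_bytes, which is PHP 7+) is to hand the session across domains with a short-lived, single-use token instead of trying to share the cookie:

        <?php
        // On domain A: mint a one-time token bound to the current session,
        // then send the browser to domain B with it.
        $token = bin2hex(random_bytes(32));
        $db->insert('sso_tokens', array(
            'token'      => $token,
            'session_id' => session_id(),
            'expires'    => time() + 30,        // deliberately short-lived
        ));
        header('Location: http://www.bar.com/sso?token=' . $token);

        // On domain B: exchange the token exactly once for the session id,
        // then set B's own cookie against the same DB-backed session store.
        $row = $db->fetchAndDelete('sso_tokens', $_GET['token']);  // hypothetical helper
        if ($row && $row['expires'] > time()) {
            session_id($row['session_id']);
            session_start();                    // now shares the same session data
        }

    Because the token is random, single-use, and expires in seconds, it avoids the obvious flaws of the hidden-image and IP/User-Agent schemes, and it works with any number of unrelated domains.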

    Read the article

  • How to merge two tests into one in RSpec

    - by thefonso
    Both of the last two tests work individually, but when both are set to run (non-pending) I get problems. Question: can I create a test that merges the two into one? How would this look? (Yes, I am new to RSpec.)

        require_relative '../spec_helper'

        # the universe is vast and infinite....and...it is empty
        describe "tic tac toe game" do
          context "the game class" do
            before (:each) do
              player_h = Player.new("X")
              player_c = Player.new("O")
              @game = Game.new(player_h, player_c)
            end

            it "method drawgrid must return a 3x3 game grid" do
              @game.drawgrid.should eq("\na #{$thegrid[:a1]}|#{$thegrid[:a2]}|#{$thegrid[:a3]} \n----------\nb #{$thegrid[:b1]}|#{$thegrid[:b2]}|#{$thegrid[:b3]} \n----------\nc #{$thegrid[:c1]}|#{$thegrid[:c2]}|#{$thegrid[:c3]} \n----------\n 1 2 3 \n")
              @game.drawgrid
            end

            # FIXME - last two tests here - how to merge into one?
            it "play method must display 3x3 game grid" do
              STDOUT.should_receive(:puts).and_return("\na #{$thegrid[:a1]}|#{$thegrid[:a2]}|#{$thegrid[:a3]} \n----------\nb #{$thegrid[:b1]}|#{$thegrid[:b2]}|#{$thegrid[:b3]} \n----------\nc #{$thegrid[:c1]}|#{$thegrid[:c2]}|#{$thegrid[:c3]} \n----------\n 1 2 3 \n").with("computer move")
              @game.play
            end

            it "play method must display 3x3 game grid" do
              STDOUT.should_receive(:puts).with("computer move")
              @game.play
            end
          end
        end

    Just for info, here is the code containing the play method:

        require_relative "player"

        # Just a Tic Tac Toe game class
        class Game
          # create players
          def initialize(player_h, player_c)
            # bring into existence the board and the players
            @player_h = player_h
            @player_c = player_c
            # value hash for the grid lives here
            $thegrid = {
              :a1 => " ", :a2 => " ", :a3 => " ",
              :b1 => " ", :b2 => " ", :b3 => " ",
              :c1 => " ", :c2 => " ", :c3 => " "
            }
            # make a global var for drawgrid which is used by external player class
            $gamegrid = drawgrid
          end

          # display grid on console
          def drawgrid
            board = "\n"
            board << "a #{$thegrid[:a1]}|#{$thegrid[:a2]}|#{$thegrid[:a3]} \n"
            board << "----------\n"
            board << "b #{$thegrid[:b1]}|#{$thegrid[:b2]}|#{$thegrid[:b3]} \n"
            board << "----------\n"
            board << "c #{$thegrid[:c1]}|#{$thegrid[:c2]}|#{$thegrid[:c3]} \n"
            board << "----------\n"
            board << " 1 2 3 \n"
            return board
          end

          # start the game
          def play
            # draw the board
            puts drawgrid
            # external call to player class
            @player = @player_c.move_computer("O")
          end
        end

        player_h = Player.new("X")
        player_c = Player.new("O")
        game = Game.new(player_h, player_c)
        game.play

    Read the article

  • C++: Why does gcc prefer non-const over const when accessing operator[]?

    - by JonasW
    This question might be more appropriately asked about C++ in general, but as I am using gcc on Linux, that's the context. Consider the following program:

        #include <iostream>
        #include <map>
        #include <string>

        using namespace std;

        template <typename TKey, typename TValue>
        class Dictionary {
        public:
            map<TKey, TValue> internal;

            TValue & operator[](TKey const & key)
            {
                cout << "operator[] with key " << key << " called " << endl;
                return internal[key];
            }

            TValue const & operator[](TKey const & key) const
            {
                cout << "operator[] const with key " << key << " called " << endl;
                return internal.at(key);
            }
        };

        int main(int argc, char* argv[])
        {
            Dictionary<string, string> dict;
            dict["1"] = "one";
            cout << "first one: " << dict["1"] << endl;
            return 0;
        }

    When executing the program, the output is:

        operator[] with key 1 called
        operator[] with key 1 called
        first one: one

    What I would like is to have the compiler choose the operator[] const method in the second call. The reason is that without having used dict["1"] before, the call to operator[] causes the internal map to create the data that does not exist, even if the only thing I wanted was some debugging output, which of course is a fatal application error. The behaviour I am looking for would be something like the C# index operator, which has a get and a set operation and where you could throw an exception if the getter tries to access something that doesn't exist:

        class MyDictionary<TKey, TVal>
        {
            private Dictionary<TKey, TVal> dict = new Dictionary<TKey, TVal>();

            public TVal this[TKey idx]
            {
                get
                {
                    if (!dict.ContainsKey(idx))
                        throw KeyNotFoundException("...");
                    return dict[idx];
                }
                set
                {
                    dict[idx] = value;
                }
            }
        }

    Thus, I wonder why gcc prefers the non-const call over the const call when non-const access is not required.
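    The short answer is that overload resolution looks only at the const-ness of the object expression, never at what the caller does with the result; dict is non-const, so the non-const operator[] always wins. A sketch of the usual ways to force the const overload:

        // 1. Read through a const view of the same object:
        const Dictionary<string, string>& cdict = dict;
        cout << cdict["1"] << endl;                  // calls operator[] const

        // 2. C++17 spells the same idea as a one-liner:
        //    cout << std::as_const(dict)["1"] << endl;   // <utility>

        // 3. Or sidestep the ambiguity with a named, throwing accessor,
        //    mirroring std::map::at:
        //    TValue const & at(TKey const & key) const { return internal.at(key); }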

    Read the article

  • Efficiently fetching and storing tweets from a few hundred twitter profiles?

    - by MSpreij
    The site I'm working on needs to fetch the tweets from 150-300 people, store them locally, and then list them on the front page. The profiles sit in groups. The pages will be showing:

    - the last 20 tweets (or 21-40, etc) by date, group of profiles, single profile, search, or "subject" (which is sort of a different group, I think)
    - a live, context-aware tag cloud (based on the last 300 tweets of the current search, group of profiles, or single profile shown)
    - various statistics (group stuff, most active, etc), which depend on the type of page shown

    We're expecting a fair bit of traffic. The last, similar site peaked at nearly 40K visits per day, and ran into trouble before I started caching pages as static files and disabling some features (some accidentally...). This was caused mostly by the fact that a page load would also fetch the last x tweets from the 3-6 profiles which had not been updated the longest. With this new site I can fortunately use cron to fetch tweets, so that helps. I'll also be denormalizing the db a little so it needs fewer joins, optimizing it for faster selects instead of size.

    Now, main question: how do I figure out which profiles to check for new tweets in an efficient manner? Some people will be tweeting more often than others, and some will tweet in bursts (this happens a lot). I want to keep the front page of the site as "current" as possible. If it comes to, say, 300 profiles, and I check 5 every minute, some tweets will only appear an hour after the fact. I can check more often (up to 20K) but want to optimize this as much as possible, both to not hit the rate limit and to not run out of resources on the local server (it hit mysql's connection limit with that other site).

    Question 2: since cron only "runs" once a minute, I figure I have to check multiple profiles each minute -- as stated, at least 5, possibly more. To try and spread it out over that minute I could have it sleep a few seconds between batches or even single profiles. But then if it takes longer than 60 seconds altogether, the script will run into itself. Is this a problem? If so, how can I avoid it?

    Question 3: any other tips? Readmes? URLs?
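    One scheme that fits the burst pattern (a sketch only; the table and column names are invented, and fetch_new_tweets stands in for the actual Twitter API call) is adaptive polling: keep a per-profile check interval that shrinks when a poll finds new tweets and grows when it doesn't, then let each cron run take the most overdue profiles:

        <?php
        // Pick the most overdue profiles for this cron run.
        $profiles = $pdo->query(
            "SELECT id, screen_name, check_interval, last_checked
               FROM profiles
              WHERE last_checked + check_interval <= UNIX_TIMESTAMP()
           ORDER BY last_checked + check_interval ASC
              LIMIT 10"
        )->fetchAll();

        foreach ($profiles as $p) {
            $new = fetch_new_tweets($p['screen_name']);   // hypothetical API wrapper
            // Burst-friendly backoff: active profiles get polled more often,
            // quiet ones less, bounded between 1 minute and 1 hour.
            $interval = count($new) > 0
                ? max(60,   $p['check_interval'] / 2)     // found tweets: speed up
                : min(3600, $p['check_interval'] * 2);    // nothing new: slow down
            $stmt = $pdo->prepare(
                "UPDATE profiles
                    SET check_interval = ?, last_checked = UNIX_TIMESTAMP()
                  WHERE id = ?");
            $stmt->execute(array($interval, $p['id']));
        }

    For question 2, a lock file (or a locked_until column checked in the SELECT) keeps a run that overshoots 60 seconds from colliding with the next one; the next run simply skips profiles another run has claimed.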

    Read the article

  • Communicate progress from local Service

    - by kpdvx
    An application I'm building uses a local Service for downloading files from the web to the phone's SD card. In this app users can browse lists of books, and read them while online. A user can also download a PDF copy of a book for offline viewing. To handle downloads I'm using a locally bound Service. I do not want this Service to run all the time, only when downloading files. So that the Service can shut itself down when its tasks are complete, I am not binding to the service; rather, I'm sending an "enqueue for download" command through the Intent passed to Context.startService.

    Books available for download are shown in a list. A user can choose to download a book by clicking on its row in the list. On download, I need to show download progress using a ProgressBar on the actual book list row. I also need to show, on the rows, whether a book is enqueued for download, or whether its download has completed or failed. The books can be shown in different activities throughout the application -- in search, or in the user's list of favorite books, for example. When the books are shown in different places, these are not the same objects, but they are uniquely identified by their bookId.

    Because I do not want to bind to the service from every Activity, my tentative plan was to use a public static final HashMap on the Service class itself to contain a mapping of bookId to download status -- an enum of enqueued, downloading, cancelled, etc. Each book view, when displayed, would check this static HashMap, and if the bookId is in the map, retrieve and display its status. I don't particularly like this idea, but at the moment it's the only way I can think of to retrieve status from the Service without having to bind to it and start it.

    Additionally, I need to retrieve the download progress percentage from the Service for a given bookId, if it is the active download. Again, I'd rather not bind to the service from every activity, so I'm not sure how to go about retrieving current progress from the Service. My current plan is to use some sort of singleton mediator that the Service pushes updates to, and the views read from. But I'm not terribly happy with this idea either.

    The reason I'd like to avoid binding to the Service from each Activity is 1.) I'm already running another Service, and 2.) binding is verbose and I'd like to avoid needing to pass around a reference to the Service (but admittedly this isn't too much of a problem). Perhaps binding to the local Service isn't expensive enough to warrant this other setup? Should I not be concerned about binding to it from each Activity? Maybe this is a non-issue?
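    One alternative to the static map (a sketch; the action string and the updateRow helper are invented names) is to have the Service broadcast status changes and let whichever Activity is currently visible listen, keyed by bookId; this avoids binding entirely and works the same from every screen that shows book rows:

        // In the download Service, whenever status or progress changes:
        Intent i = new Intent("com.example.DOWNLOAD_STATUS");
        i.putExtra("bookId", bookId);
        i.putExtra("status", status.name());     // e.g. ENQUEUED / DOWNLOADING / DONE / FAILED
        i.putExtra("percent", percent);
        sendBroadcast(i);

        // In any Activity that shows book rows:
        private final BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override public void onReceive(Context ctx, Intent i) {
                updateRow(i.getIntExtra("bookId", -1),   // hypothetical UI helper
                          i.getStringExtra("status"),
                          i.getIntExtra("percent", 0));
            }
        };

        @Override protected void onResume() {
            super.onResume();
            registerReceiver(receiver, new IntentFilter("com.example.DOWNLOAD_STATUS"));
        }

        @Override protected void onPause() {
            unregisterReceiver(receiver);
            super.onPause();
        }

    For the "what is the status right now, before any broadcast arrives" case, a small persistent record (a table or shared preferences keyed by bookId, written by the Service) is a less fragile companion than static state, since the Service process can be killed and restarted between downloads.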

    Read the article

  • MVC partial views lose track of the images folder when using jQuery

    - by jvelez
    This is what I have, and it works:

        $(function(){
            $('.slide-out-div').tabSlideOut({
                tabHandle: '.handle',              //class of the element that will become your tab
                pathToTabImage: 'http://mhmiisdev2/images/contact_tab.gif', //path to the image for the tab (optionally can be set using css)
                imageHeight: '122px',              //height of tab image
                imageWidth: '40px',                //width of tab image
                tabLocation: 'right',              //side of screen where tab lives: top, right, bottom, or left
                speed: 300,                        //speed of animation
                action: 'click',                   //options: 'click' or 'hover', action to trigger animation
                topPos: '200px',                   //position from the top; use if tabLocation is left or right
                leftPos: '20px',                   //position from left; use if tabLocation is bottom or top
                fixedPosition: true                //options: true makes it stick (fixed position) on scroll
            });
        });

    This is what I want, and it doesn't work when I change from one controller to another controller. NOTICE THE IMAGE PATH IS NOT ABSOLUTE:

        $(function(){
            $('.slide-out-div').tabSlideOut({
                tabHandle: '.handle',
                pathToTabImage: '/images/contact_tab.gif', //path to the image for the tab
                imageHeight: '122px',              // (remaining options same as above)
                imageWidth: '40px',
                tabLocation: 'right',
                speed: 300,
                action: 'click',
                topPos: '200px',
                leftPos: '20px',
                fixedPosition: true
            });
        });

    The HTML, for completeness:

        <div class="slide-out-div">
            <a class="handle" href="http://link-for-non-js-users.html">Content</a>
            <h3>Medical Variance Reports</h3>
            <div>
                <ul>
                    <li><a href="http://mhmssrs2/Reports/Pages/Report.aspx?" target="_blank">Individual Medicines</a></li>
                </ul>
            </div>
        </div>

    I suspect that pathToTabImage: '/images/contact_tab.gif' somehow loses its context when browsing through controllers. Help me understand...

    Read the article

  • C++0x class factory with variadic templates problem

    - by randomenglishbloke
    I have a class factory where I'm using variadic templates for the constructor parameters (code below). However, when I attempt to use it, I get compile errors; when I originally wrote it without parameters, it worked fine. Here is the class:

        template< class Base, typename KeyType, class... Args >
        class GenericFactory {
        public:
            GenericFactory(const GenericFactory&) = delete;
            GenericFactory &operator=(const GenericFactory&) = delete;

            typedef Base* (*FactFunType)(Args...);

            template <class Derived>
            static void Register(const KeyType &key, FactFunType fn) {
                FnList[key] = fn;
            }

            static Base* Create(const KeyType &key, Args... args) {
                auto iter = FnList.find(key);
                if (iter == FnList.end()) return 0;
                else return (iter->second)(args...);
            }

            static GenericFactory &Instance() { static GenericFactory gf; return gf; }

        private:
            GenericFactory() = default;
            typedef std::unordered_map<KeyType, FactFunType> FnMap;
            static FnMap FnList;
        };

        template <class B, class D, typename KeyType, class... Args>
        class RegisterClass {
        public:
            RegisterClass(const KeyType &key) {
                GenericFactory<B, KeyType, Args...>::Instance().Register(key, FactFn);
            }
            static B *FactFn(Args... args) { return new D(args...); }
        };

    Here is the error when calling (e.g.):

        // Tucked out of the way
        RegisterClass<DataMap, PDColumnMap, int, void *> RC_CT_PD(0);

    GCC 4.5.0 gives me:

        In constructor 'RegisterClass<B, D, KeyType, Args>::RegisterClass(const KeyType&)
        [with B = DataMap, D = PDColumnMap, KeyType = int, Args = {void*}]':
        no matching function for call to
        'GenericFactory<DataMap, int, void*>::Register(const int&, DataMap* (&)(void*))'

    I can't see why it won't compile, and after extensive googling I couldn't find the answer. Can anyone tell me what I'm doing wrong (aside from the strange variable name, which makes sense in context)?
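    The likely culprit is Register's template parameter: Derived appears nowhere in Register's parameter list, so it cannot be deduced from the arguments (key, fn), and a call that doesn't name it explicitly finds no viable function -- which is exactly the "no matching function" diagnostic above. A sketch of the two obvious fixes, plus the static-member definition that is also needed to link:

        // Fix A: Derived is unused inside Register, so drop the template entirely.
        static void Register(const KeyType &key, FactFunType fn) {
            FnList[key] = fn;
        }

        // Fix B: keep it, but name it at the call site (the 'template' keyword
        // is required because GenericFactory<...> is a dependent type here):
        //   GenericFactory<B, KeyType, Args...>::template Register<D>(key, FactFn);

        // Either way, the static map still needs an out-of-class definition
        // somewhere, or the program won't link:
        template<class Base, typename KeyType, class... Args>
        typename GenericFactory<Base, KeyType, Args...>::FnMap
            GenericFactory<Base, KeyType, Args...>::FnList;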

    Read the article

  • Is this AES encryption wrapper safe? - yet another take...

    - by user393087
    After taking into account the answers to my questions here and here, I created a (well, maybe) improved version of my wrapper. The key issue was: what if an attacker knows what is encoded? He might then find the key and encode other messages. So I added XOR before encryption. In this version I also prepend the IV to the data, as was suggested. sha256 on the key is only to make sure the key is as long as the algorithm needs; I know the key should not be plain text but calculated with many iterations to prevent a dictionary attack.

        function aes192ctr_en($data,$key) {
            $iv  = mcrypt_create_iv(24,MCRYPT_DEV_URANDOM);
            $xor = mcrypt_create_iv(24,MCRYPT_DEV_URANDOM);
            $key = hash_hmac('sha256',$key,$iv,true);
            $data = $xor.((string)$data ^ (string)str_repeat($xor,(strlen($data)/24)+1));
            $data = hash('md5',$data,true).$data;
            return $iv.mcrypt_encrypt('rijndael-192',$key,$data,'ctr',$iv);
        }

        function aes192ctr_de($data,$key) {
            $iv   = substr($data,0,24);
            $data = substr($data,24);
            $key  = hash_hmac('sha256',$key,$iv,true);
            $data = mcrypt_decrypt('rijndael-192',$key,$data,'ctr',$iv);
            $md5  = substr($data,0,16);
            $data = substr($data,16);
            if (hash('md5',$data,true)!==$md5) return false;
            $xor  = substr($data,0,24);
            $data = substr($data,24);
            $data = ((string)$data ^ (string)str_repeat($xor,(strlen($data)/24)+1));
            return $data;
        }

        $encrypted = aes192ctr_en('secret text','password');
        echo $encrypted;
        echo aes192ctr_de($encrypted,'password');

    Another question: is CTR mode OK in this context, or would it be better to use CBC mode? Again, by safe I mean: could an attacker recover the password if he knows the exact text that was encrypted and knows the above method? I assume a random and long password here. Maybe instead of XOR it would be safer to randomize the initial data with another run of AES, or a simpler algorithm like TEA or Trivium?

    Read the article

  • Starting jetty with spring xml as a background process/thread

    - by compass
    My goal is to set up a Jetty test server and inject a custom servlet to test some REST classes in my project. I was able to launch the server with Spring XML and run tests against that server. The issue I'm having is that sometimes, after the server has started, the process stops at the point just before running the tests. It seems Jetty didn't go into the background. It works every time on my computer, but when deployed to my CI server it doesn't work. It also doesn't work when I'm on VPN. (Strange.) The server should be completely initialized, as when the tests got stuck I was able to access the server using a browser. Here is my Spring context XML:

        ....
        <bean id="servletHolder" class="org.eclipse.jetty.servlet.ServletHolder">
            <constructor-arg ref="courseApiServlet"/>
        </bean>

        <bean id="servletHandler" class="org.eclipse.jetty.servlet.ServletContextHandler"/>

        <!-- Adding the servlet holders to the handlers -->
        <bean id="servletHandlerSetter" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
            <property name="targetObject" ref="servletHandler"/>
            <property name="targetMethod" value="addServlet"/>
            <property name="arguments">
                <list>
                    <ref bean="servletHolder"/>
                    <value>/*</value>
                </list>
            </property>
        </bean>

        <bean id="httpTestServer" class="org.eclipse.jetty.server.Server"
              init-method="start" destroy-method="stop" depends-on="servletHandlerSetter">
            <property name="connectors">
                <list>
                    <bean class="org.eclipse.jetty.server.nio.SelectChannelConnector">
                        <property name="port" value="#{settings['webservice.server.port']}" />
                    </bean>
                </list>
            </property>
            <property name="handler">
                <ref bean="servletHandler" />
            </property>
        </bean>

    Running the latest Jetty 8.1.8 and Spring 3.1.3. Any idea?
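    As a point of comparison, the programmatic equivalent (a sketch in JUnit 4 style; the servlet class and port are placeholders) keeps the server lifecycle inside the test run itself, which makes hangs like this easier to isolate: server.start() returns as soon as the connector is listening, and Jetty's non-daemon threads then keep the JVM alive until stop() is called, so a missing or late stop() looks exactly like a process that "won't go to the background":

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.servlet.ServletContextHandler;
        import org.eclipse.jetty.servlet.ServletHolder;
        import org.junit.AfterClass;
        import org.junit.BeforeClass;

        public class RestApiTest {
            private static Server server;

            @BeforeClass
            public static void startJetty() throws Exception {
                server = new Server(8081);
                ServletContextHandler handler = new ServletContextHandler();
                handler.addServlet(new ServletHolder(new CourseApiServlet()), "/*");
                server.setHandler(handler);
                server.start();      // non-blocking: returns once the port is bound
            }

            @AfterClass
            public static void stopJetty() throws Exception {
                server.stop();       // lets the JVM exit when the tests finish
            }
        }

    With the Spring XML variant, the same rule applies: something must eventually close the application context so that destroy-method="stop" runs; on a CI box where the context is never closed, the Jetty threads will hold the process open indefinitely.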

    Read the article

  • Declare Ajax-webservicecall OnSuccess method anonymous.

    - by user333113
    I write a lot of Ajax JavaScript code and have a little design problem which I'm not totally satisfied with. A lot of times I end up writing something like this:

        var typeOfPopup;

        function RetrievePopupContent(_typeOfPopup) {
            switch (_typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError);
                    break;
            }
            typeOfPopup = _typeOfPopup;
        }

        function DisplayPopup(result) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    All right, this is a simplified example of what I mean; often I end up with far worse code, I believe. What I don't like is the global state variable outside the functions. One solution I wasn't thinking about when writing this code is to send a context object. I believe you could write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError, typeOfPopup);
                    break;
            }
        }

        function DisplayPopup(result, typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    $get('Popup1').innerHTML = result;
                    break;
                case Popup2:
                    $get('Popup2').innerHTML = result;
                    break;
            }
        }

    Is this the recommended way? What I also want to do is to be able to write something like this:

        function RetrievePopupContent(typeOfPopup) {
            switch (typeOfPopup) {
                case Popup1:
                    WebService.RetrievePopup1Content(param1, param2, new function(result) {
                        $get('Popup1').innerHTML = result;
                    }, OnError);
                    break;
                case Popup2:
                    WebService.RetrievePopup2Content(param1, param2, new function(result) {
                        $get('Popup2').innerHTML = result;
                    }, OnError);
                    break;
            }
        }

    Is this possible at all -- to declare the callback function anonymously? I am grateful for all opinions on the two options I mentioned, and also for new alternatives to get rid of the global variables I have used this way.
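    On the last point: yes, anonymous callbacks are the standard idiom; the only catch in the snippet above is that it should be function(result) { ... }, not new function(result) { ... } (the new runs the function as a constructor and passes the resulting object to the web service, which is not what's wanted). A closure over the popup type can also remove both the global and the second switch, assuming the element id can be derived from the popup type (an assumption about this particular page):

        // Sketch: one success handler built per call; the closure captures
        // typeOfPopup, so there is no global and no duplicated switch.
        function RetrievePopupContent(typeOfPopup) {
            var onSuccess = function (result) {      // plain anonymous function, no 'new'
                $get(typeOfPopup).innerHTML = result;
            };
            switch (typeOfPopup) {
                case 'Popup1':
                    WebService.RetrievePopup1Content(param1, param2, onSuccess, OnError);
                    break;
                case 'Popup2':
                    WebService.RetrievePopup2Content(param1, param2, onSuccess, OnError);
                    break;
            }
        }

    The context-object variant also works where the service proxy supports a user-context parameter, but the closure keeps the display logic next to the request that triggered it, which usually reads better.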

    Read the article

  • How to see if type is instance of a class in Haskell?

    - by Raekye
    I'm probably doing this completely wrong (the un-Haskell way); I'm just learning, so please let me know if there's a better way to approach this.

    Context: I'm writing a bunch of tree structures. I want to reuse my prettyprint function for binary trees. Not all trees can use the generic Node/Branch data type, though; different trees need different extra data. So to reuse the prettyprint function I thought of creating a class different trees would be instances of:

        class GenericBinaryTree a where
          is_leaf :: a -> Bool
          left    :: a -> a
          node    :: a -> b
          right   :: a -> a

    This way they only have to implement methods to retrieve the left, right, and current node value, and prettyprint doesn't need to know about the internal structure. Then I get down to here:

        prettyprint_helper :: GenericBinaryTree a => a -> [String]
        prettyprint_helper tree
          | is_leaf tree = []
          | otherwise = ("{" ++ (show (node tree)) ++ "}")
              : (prettyprint_subtree (left tree) (right tree))
          where
            prettyprint_subtree left right =
              ((pad "+- " "| ") (prettyprint_helper right))
                ++ ((pad "`- " "  ") (prettyprint_helper left))
            pad first rest = zipWith (++) (first : repeat rest)

    And I get the error

        Ambiguous type variable 'a0' in the constraint: (Show a0) arising from a use of 'show'

    for (show (node tree)). Here's an example of the most basic tree data type and instance definition (my other trees have other fields, but they're irrelevant to the generic prettyprint function):

        data Tree a = Branch (Tree a) a (Tree a) | Leaf

        instance GenericBinaryTree (Tree a) where
          is_leaf Leaf = True
          is_leaf _    = False
          left  (Branch left node right) = left
          right (Branch left node right) = right
          node  (Branch left node right) = node

    I could have defined node :: a -> [String] and dealt with the stringification in each instance/type of tree, but this feels neater. In terms of prettyprint I only need a string representation, but if I add other generic binary tree functions later, I may want the actual values. So how can I write this to work whether the node value is an instance of Show or not? Or what other way should I be approaching this problem? In an object-oriented language I could easily check whether a class implements something, or whether an object has a method. I can't use something like

        prettyprint :: Show a => a -> String

    because it's not the tree that needs to be showable, it's the value inside the tree (returned by the function node) that needs to be showable. I also tried changing node to Show b => a -> b, without luck (and a bunch of other type class/precondition combinations; I don't even know what I'm doing anymore).
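    The root problem is the signature node :: a -> b, which promises to return whatever type b each caller chooses; no instance can honour that, and since nothing ties b to the tree type, the Show b constraint is ambiguous. One sketch of a fix using an associated type family (a functional dependency or a two-parameter class would also work):

        {-# LANGUAGE TypeFamilies, FlexibleContexts #-}

        class GenericBinaryTree t where
          type Node t                 -- each tree names its own node type
          is_leaf :: t -> Bool
          left    :: t -> t
          right   :: t -> t
          node    :: t -> Node t

        data Tree a = Branch (Tree a) a (Tree a) | Leaf

        instance GenericBinaryTree (Tree a) where
          type Node (Tree a) = a
          is_leaf Leaf = True
          is_leaf _    = False
          left  (Branch l _ _) = l
          right (Branch _ _ r) = r
          node  (Branch _ n _) = n

        -- Show is now demanded only by the functions that actually show,
        -- and only of the node type, not the tree:
        prettyprint_helper :: (GenericBinaryTree t, Show (Node t)) => t -> [String]
        prettyprint_helper tree
          | is_leaf tree = []
          | otherwise    = ("{" ++ show (node tree) ++ "}") : []  -- subtree logic as before

    Other generic tree functions that don't print can keep the plain GenericBinaryTree t constraint and still get the real node values back, which answers the "whether or not it's an instance of Show" concern: the constraint lives on the function, not on the class.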

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

    Introduction

    A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

    This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

    An Axiom of Value

    In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

    - A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished.
    - Changing the component of a value type requires construction of a new value.
    - The equals and hashCode operations are strictly component-wise.
    - If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

    A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
    These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

    Representations of Value

    Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

        /*Complex x = Complex.valueOf(1.0, 2.0):*/
        double x_re = 1.0, x_im = 2.0;

    These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

        double distance(/*Complex x:*/ double x_re, double x_im,
                        /*Complex y:*/ double y_re, double y_im) {
            /*Complex z = x.minus(y):*/
            double z_re = x_re - y_re, z_im = x_im - y_im;
            /*return z.abs():*/
            return Math.sqrt(z_re*z_re + z_im*z_im);
        }

    A boxed representation groups component values under a single object reference. The reference is to a 'wrapper class' that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.)

    The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no 'array of structs' feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any 'real' JVM type should have a story for returns, arrays, and API interoperability.

    The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
    If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions.

    Let's Start Coding

    Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type.

        @ValueSafe
        public final class Complex implements java.io.Serializable {
            // immutable component structure:
            public final double re, im;
            private Complex(double re, double im) {
                this.re = re; this.im = im;
            }

            // interoperability methods:
            public String toString() { return "Complex("+re+","+im+")"; }
            public List<Double> asList() { return Arrays.asList(re, im); }
            public boolean equals(Complex c) {
                return re == c.re && im == c.im;
            }
            public boolean equals(@ValueSafe Object x) {
                return x instanceof Complex && equals((Complex) x);
            }
            public int hashCode() {
                return 31*Double.valueOf(re).hashCode()
                        + Double.valueOf(im).hashCode();
            }

            // factory methods:
            public static Complex valueOf(double re, double im) {
                return new Complex(re, im);
            }
            public Complex changeRe(double re2) { return valueOf(re2, im); }
            public Complex changeIm(double im2) { return valueOf(re, im2); }
            public static Complex cast(@ValueSafe Object x) {
                return x == null ? ZERO : (Complex) x;
            }

            // utility methods and constants:
            public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }
            public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }
            public double abs() { return Math.sqrt(re*re + im*im); }
            public static final Complex PI = valueOf(Math.PI, 0.0);
            public static final Complex ZERO = valueOf(0.0, 0.0);
        }

    This is not a minimal definition, because it includes some utility methods and other optional parts. The essential elements are as follows:

    - The class is marked as a value type with an annotation.
    - The class is final, because it does not make sense to create subclasses of value types.
    - The fields of the class are all non-private and final. (I.e., the type is immutable and structurally transparent.)
    - From the supertype Object, all public non-final methods are overridden.
    - The constructor is private.

    Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types:

    - One or more factory methods are responsible for value creation, including a component-wise valueOf method.
    - There are utility methods for complex arithmetic and instance creation, such as plus and changeIm.
    - There are static utility constants, such as PI.
    - The type is serializable, using the default mechanisms.
    - There are methods for converting to and from dynamically typed references, such as asList and cast.

    The Rules

    In order to use value types properly, the programmer must avoid value-unsafe operations. A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations. No such compilers exist today, but to simplify our account here, we will pretend that they do exist.

    A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type. If a value-safe class is marked final, it is in fact a value type.
All other value-safe classes must be abstract. The non-static fields of a value class must be non-private and final, and all its constructors must be private.

Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

@ValueSafe
public interface ValueType extends java.io.Serializable {
    // All methods listed here must get redefined.
    // Definitions must be value-safe, which means
    // they may depend on component values only.
    List<? extends Object> asList();
    int hashCode();
    boolean equals(@ValueSafe Object c);
    String toString();
}

//@ValueSafe inherited from supertype:
public final class Complex implements ValueType { …

The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM in performing dynamic enforcement of the rules for value types.

Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:

- The type of E is a value-safe type.
- E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
- E is a call to a method whose declaration is marked @ValueSafe.
- E is an assignment to a value-safe variable, field reference, or array reference.
- E is a cast to a value-safe type from a value-safe expression.
- E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with null or with another value-safe expression.

In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
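The note uses @ValueSafe throughout without showing its declaration. For reference, here is a minimal sketch of what such an annotation might look like; this is my guess at a plausible declaration, not an official API (the TYPE_PARAMETER target assumes Java 8 or later):

import java.lang.annotation.*;

// Hypothetical marker annotation for the value-safety rules described above.
// Runtime retention is assumed, so that a JVM could enforce the rules dynamically.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER,
         ElementType.LOCAL_VARIABLE, ElementType.METHOD, ElementType.TYPE_PARAMETER})
public @interface ValueSafe { }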
More Code

To illustrate these rules, here are some usage examples for Complex:

Complex pi = Complex.valueOf(Math.PI, 0);
Complex zero = pi.changeRe(0);      //zero = pi; zero.re = 0;
ValueType vtype = pi;
@SuppressWarnings("value-unsafe")
Object obj = pi;
@ValueSafe Object obj2 = pi;
obj2 = new Object();   // ok
List<Complex> clist = new ArrayList<Complex>();
clist.add(pi);   // (ok assuming List.add param is @ValueSafe)
List<ValueType> vlist = new ArrayList<ValueType>();
vlist.add(pi);   // (ok)
List<Object> olist = new ArrayList<Object>();
olist.add(pi);   // warning: "value-unsafe"
boolean z = pi.equals(zero);
boolean z1 = (pi == zero);   // error: reference comparison on value type
boolean z2 = (pi == null);   // error: reference comparison on value type
boolean z3 = (pi == obj2);   // error: reference comparison on value type
synchronized (pi) { }        // error: synch of value, unpredictable result
synchronized (obj2) { }      // unpredictable result
Complex qq = pi;
qq = null;               // possible NPE; warning: "null-unsafe"
qq = (Complex) obj;      // warning: "null-unsafe"
qq = Complex.cast(obj);  // OK
@SuppressWarnings("null-unsafe")
Complex empty = null;    // possible NPE
qq = empty;              // possible NPE (null pollution)

The Payoffs

It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters. (A sketch of this transformation appears at the end of this section.)

Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect eligible programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.
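Here is my sketch of the static-method transformation promised above. The class and method names are purely illustrative; a real compiler or JVM would generate its own internal forms:

// Hand-lowered form of:  Complex z = x.minus(y);  return z.abs();
// Each Complex value becomes a (re, im) pair of doubles, and the
// instance methods become static methods over the explicit components.
final class ComplexOps {
    // Complex.minus, lowered; two methods are needed because Java can
    // return only one value per call (the pain-point tuples would address).
    static double minus_re(double x_re, double x_im, double y_re, double y_im) {
        return x_re - y_re;
    }
    static double minus_im(double x_re, double x_im, double y_re, double y_im) {
        return x_im - y_im;
    }
    // Complex.abs, lowered.
    static double abs(double re, double im) {
        return Math.sqrt(re*re + im*im);
    }
    // The distance function from earlier, expressed via the lowered methods.
    static double distance(double x_re, double x_im, double y_re, double y_im) {
        double z_re = minus_re(x_re, x_im, y_re, y_im);
        double z_im = minus_im(x_re, x_im, y_re, y_im);
        return abs(z_re, z_im);
    }
}

Note how the lowered minus needs two methods, one per component; the tuple machinery discussed later would let a single method return both halves at once.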
Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.

Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:

public interface java.util.List<@ValueSafe T> extends Collection<T> { …

public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

(The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with the low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.

Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know whether an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data.

Unboxed arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements.

Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
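As an aside, the "double-length array" workaround mentioned near the beginning of the note can be spelled out concretely. The following is my sketch of that trick, using the Complex class defined above; it works only because both components happen to be doubles:

// Faking a Complex[] of length n with a double[] of length 2*n:
// element i lives at data[2*i] (re) and data[2*i+1] (im).
final class ComplexArray {
    private final double[] data;
    ComplexArray(int length) { this.data = new double[2 * length]; }
    int length() { return data.length / 2; }
    Complex get(int i) { return Complex.valueOf(data[2*i], data[2*i + 1]); }
    void set(int i, Complex c) { data[2*i] = c.re; data[2*i + 1] = c.im; }
}

A value type with heterogeneous components (say, a long and a BigInteger) has no such homogeneous encoding, which is exactly why real JVM support is needed.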
Implicit Method Definitions

The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:

public class Complex implements ValueType {  // implicitly final
    public double re, im;  // implicitly public final
    //implicit methods are defined elementwise from the fields:
    //  toString, asList, equals(2), hashCode, valueOf, cast
    //optionally, explicit methods (plus, abs, etc.) would go here
}

In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

public @ValueType class Complex { … // implicitly final, implements ValueType

(But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

Implicitly Defined Value Types

So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number, or a Cartesian point. The name and the methods convey the intended meaning.

But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.

For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
    double $1, $2;
}

Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields.

The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it. The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types. (Specifics of naming and name mangling need some tasteful engineering.)
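Until a JVM can spin such types on demand, a hand-written stand-in conveys the intended shape. The class below is my sketch only; the name is invented, and the component-wise methods imitate what the implicit machinery would generate for $DD:

// Hand-written stand-in for java.lang.tuple.$DD: an ordered pair of
// doubles whose "implicit" methods are spelled out by hand.
public final class TupleDD implements java.io.Serializable {
    public final double $1, $2;
    private TupleDD(double a, double b) { this.$1 = a; this.$2 = b; }
    public static TupleDD valueOf(double a, double b) { return new TupleDD(a, b); }
    public boolean equals(Object x) {
        return x instanceof TupleDD
            && $1 == ((TupleDD) x).$1 && $2 == ((TupleDD) x).$2;
    }
    public int hashCode() {
        return 31 * Double.valueOf($1).hashCode() + Double.valueOf($2).hashCode();
    }
    public String toString() { return "(" + $1 + ", " + $2 + ")"; }

    // A method that naturally returns a pair, in the divideAndRemainder style:
    // quotient rounded toward negative infinity, plus the matching remainder.
    public static TupleDD divMod(double num, double den) {
        double q = Math.floor(num / den);
        return valueOf(q, num - den * q);
    }
}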
Tuples also seem to demand, even more than nominal types, some support from the language. (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.) At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists. Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection. That is a task for another day.

Other Use Cases

Besides complex numbers and simple tuples there are many use cases for value types. Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

Other types have a variable-length 'tail' of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include 'header' information.

Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array.
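The header-plus-tail layout is easy to miss in prose, so here is a small illustration of my own devising: a toy string-like value type that keeps its first two characters in header fields and any overflow in a never-mutated tail array:

// A toy string-like value type: up to two chars live in 'header'
// fields; characters beyond the first two live in a tail array that
// is filled once and never mutated afterwards.
public final class ShortString implements java.io.Serializable {
    public final int length;
    public final char head0, head1;   // first two chars ('\0' if absent)
    private final char[] tail;        // chars beyond the first two, or null
    private ShortString(int length, char h0, char h1, char[] tail) {
        this.length = length; this.head0 = h0; this.head1 = h1; this.tail = tail;
    }
    public static ShortString valueOf(String s) {
        int n = s.length();
        char h0 = n > 0 ? s.charAt(0) : '\0';
        char h1 = n > 1 ? s.charAt(1) : '\0';
        char[] tail = null;
        if (n > 2) {
            tail = new char[n - 2];
            s.getChars(2, n, tail, 0);   // the only write the tail ever sees
        }
        return new ShortString(n, h0, h1, tail);
    }
    public char charAt(int i) {
        if (i < 0 || i >= length) throw new IndexOutOfBoundsException();
        return i == 0 ? head0 : i == 1 ? head1 : tail[i - 2];
    }
}

For strings of length two or less, the tail is just a null reference, exactly the short-length optimization the note describes.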
Field types and their order must be a globally visible part of the API. The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements. This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations. A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile. Perhaps constant pool references to value types need to declare the field order as assumed by each API user.

This brings up the delicate status of private fields in a value type. It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack. If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the trade-off, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

Similar to String, we could build an efficient multi-precision decimal type along these lines:

public final class DecimalValue implements ValueType {
    protected final long header;
    protected final BigInteger digits;
    private DecimalValue(long header, BigInteger digits) {
        this.header = header; this.digits = digits;
    }
    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        return new DecimalValue(((long)value << 32) + scale, null);
    }
    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}

Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.

(Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxed arrays can be packed with such values.
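A Float4 of that sort might look like the following. This sketch is mine, not the note's; the point is only that four floats, declared as one value type, could map onto a single 128-bit vector register:

// A four-component float vector, shaped like the stock 128-bit SIMD
// types; a natural candidate for unboxing into one vector register.
public final class Float4 implements java.io.Serializable {
    public final float x, y, z, w;
    private Float4(float x, float y, float z, float w) {
        this.x = x; this.y = y; this.z = z; this.w = w;
    }
    public static Float4 valueOf(float x, float y, float z, float w) {
        return new Float4(x, y, z, w);
    }
    public Float4 plus(Float4 v) {
        return valueOf(x + v.x, y + v.y, z + v.z, w + v.w);
    }
    public float dot(Float4 v) {
        return x*v.x + y*v.y + z*v.z + w*v.w;
    }
}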
More Daydreams

A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.

@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …

public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …

It seems possible that Integer and the other wrapper types could be retrofitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retrofit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;         // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:

1. Add the value-safety restrictions to an experimental version of javac.
2. Code some sample applications with value types, including Complex and DecimalValue.
3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
   Ensure the feasibility of the performance model for the sample applications.
3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing from me for now. Back to work!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, a question might be raised about how to consume the various Windows Azure services, such as the storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node.js application.

We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function to copy the records from WASD to the table service:

1. Delete the table named "resource".
2. Create a new table named "resource". These two steps ensure that we have an empty table.
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable storing the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

Once we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async threads pool. And once it's done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion callback. If we performed the table creation code after the deletion code, they would be invoked in parallel. Next, for each record from WASD I created an entity and then inserted it into the table service. Finally I sent the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                    for (var i = 0; i < results.rows.length; i++) {
                                        var entity = {
                                            "PartitionKey": results.rows[i][1],
                                            "RowKey": results.rows[i][0],
                                            "Value": results.rows[i][2]
                                        };
                                        client.insertEntity(tableName, entity, function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("entity inserted");
                                            }
                                        });
                                    }
                                    // send the response
                                    console.log("all done");
                                    res.send(200, "All done!");
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Now we can publish it to the cloud and have a try. But normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to use my local database. The code changes as below.

// windows azure sql database
//var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
// sql server
var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
// windows azure storage
//var client = azure.createTableService(storageAccountName, storageAccountKey);
// local storage emulator
var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
The correct logic should be: only when all entities have been copied to the table service with no errors should I send the response to the browser; otherwise I should send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred. Here we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see the response is sent only after all entities have been inserted.

Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; see, for example, the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will then be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for the cloud and local environments. We can also use the endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I defined the configuration variables and then used the SDK to retrieve their values and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configurations when moving between the local and cloud environments, since the service runtime will take care of it. At the end of the code we listen on the port retrieved from the SDK as well.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is defined to be the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was launched inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, with a child element named EntryPoint that specifies the Node.js command line (a sketch follows below). That way the Node.js application owns the connection to the service runtime and is able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the local connection string and storage account.
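For reference, the CSDEF addition described above might look roughly like the snippet below. The role name and file layout are my assumptions; the Runtime/EntryPoint/ProgramEntryPoint elements come from the Windows Azure service definition schema:

<WorkerRole name="WorkerRole1">
  <Runtime>
    <EntryPoint>
      <!-- Hand the runtime connection to node.exe instead of the worker role class -->
      <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <!-- endpoints, configuration settings, etc. -->
</WorkerRole>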
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime, and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

Node.js is a very new and young network application platform. But since it is simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Asterisk SIP digest authentication username mismatch

    - by Matt
I have an Asterisk system that I'm attempting to get to work as a backup for our 3Com system. We already use it for a conference bridge. Our phones are the 3Com 3C10402B, so I don't have the issue of older 3Com phones that come without a SIP image. The 3Com phones are communicating SIP with the Asterisk, but are unable to register because they present a digest username value that doesn't match what Asterisk expects.

As an example, here are the relevant lines from a successful registration from a soft phone:

Server sends:
WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="1cac3853"
Phone responds:
Authorization: Digest username="2321", realm="asterisk", nonce="1cac3853", uri="sip:192.168.254.12", algorithm=md5, response="d32df9ec719817282460e7c2625b6120"

For the 3Com phone, those same lines look like this (and registration fails):

Server sends:
WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
Phone responds:
Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"

Basically, Asterisk wants to see a username of 2321 in the Digest username field, but the 3Com phone is sending sip:[email protected]. Anyone know how to tell Asterisk to accept this format of username in the digest authentication?

Here is the sip.conf info for that extension:

[2321]
deny=0.0.0.0/0.0.0.0
disallow=all
type=friend
secret=1234
qualify=yes
port=5060
permit=0.0.0.0/0.0.0.0
nat=yes
mailbox=2321@device
host=dynamic
dtmfmode=rfc2833
dial=SIP/2321
context=from-internal
canreinvite=no
callerid=device <2321
allow=ulaw, alaw
call-limit=50

... and for those interested in the grit, here is the debug output of the registration attempt:

REGISTER sip:192.168.254.12 SIP/2.0
v: SIP/2.0/UDP 192.168.254.157:5060
t:
f:
i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18580 REGISTER
Max-Forwards: 70
m: ;dt=544
Expires: 3600
User-Agent: 3Com-SIP-Phone/V8.0.1.3
X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
--- (11 headers 0 lines) ---

Using latest REGISTER request as basis request
Sending to 192.168.254.157 : 5060 (no NAT)

SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
From:
To:
Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18580 REGISTER
User-Agent: Asterisk PBX
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
Supported: replaces
Contact:
Content-Length: 0

SIP/2.0 401 Unauthorized
Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
From:
To: ;tag=as3fb867e2
Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18580 REGISTER
User-Agent: Asterisk PBX
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
Supported: replaces
WWW-Authenticate: Digest algorithm=MD5, realm="asterisk", nonce="6c915c33"
Content-Length: 0

Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)

confbridge*CLI

REGISTER sip:192.168.254.12 SIP/2.0
v: SIP/2.0/UDP 192.168.254.157:5060
t:
f:
i: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18581 REGISTER
Max-Forwards: 70
m: ;dt=544
Expires: 3600
User-Agent: 3Com-SIP-Phone/V8.0.1.3
Authorization: Digest username="sip:[email protected]", realm="asterisk", nonce="6c915c33", uri="sip:192.168.254.12", opaque="", algorithm=MD5, response="a89df25f19e4b4598595f919dac9db81"
X-3Com-PhoneInfo: firstRegistration=no; primaryCallP=192.168.254.12; secondaryCallP=0.0.0.0;
--- (12 headers 0 lines) ---

Using latest REGISTER request as basis request
Sending to 192.168.254.157 : 5060 (NAT)

SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
From:
To:
Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18581 REGISTER
User-Agent: Asterisk PBX
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
Supported: replaces
Contact:
Content-Length: 0

SIP/2.0 403 Authentication user name does not match account name
Via: SIP/2.0/UDP 192.168.254.157:5060;received=192.168.254.157
From:
To: ;tag=as3fb867e2
Call-ID: fa4451d8-01d6-1cc2-13e4-00e0bb33beb9
CSeq: 18581 REGISTER
User-Agent: Asterisk PBX
Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
Supported: replaces
Content-Length: 0

Scheduling destruction of SIP dialog 'fa4451d8-01d6-1cc2-13e4-00e0bb33beb9' in 32000 ms (Method: REGISTER)

Thanks for your input!

    Read the article

  • iptables not allowing mysql connections to aliased ips?

    - by Curtis
I have a fairly simple iptables firewall on a server that provides MySQL services, but iptables seems to be giving me very inconsistent results.

The default policy in the script is as follows:

iptables -P INPUT DROP

I can then make MySQL public with the following rule:

iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

With this rule in place, I can connect to MySQL from any source IP to any destination IP on the server without a problem. However, when I try to restrict access to just three IPs by replacing the above line with the following, I run into trouble (xxx = masked octet):

iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.184 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.196 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.251 -j ACCEPT

Once the above rules are in place, the following happens:

I can connect to the MySQL server from the .184, .196 and .251 hosts just fine, as long as I am connecting to the MySQL server using its default IP address or an IP alias in the same subnet as the default IP address.

I am unable to connect to MySQL using IP aliases that are assigned to the server from a different subnet than the server's default IP when I'm coming from the .184 or .196 hosts, but .251 works just fine. From the .184 or .196 hosts, a telnet attempt just hangs...

# telnet 209.xxx.xxx.22 3306
Trying 209.xxx.xxx.22...

If I remove the .251 line (making .196 the last rule added), the .196 host still can not connect to MySQL using IP aliases (so it's not the order of the rules that is causing the inconsistent behavior). I know, this particular test was silly, as it shouldn't matter what order these three rules are added in, but I figured someone might ask.

If I switch back to the "public" rule, all hosts can connect to the MySQL server using either the default or aliased IPs (in either subnet):

iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

The server is running in a CentOS 5.4 OpenVZ/Proxmox container (2.6.32-4-pve).

And, just in case you prefer to see the problem rules in the context of the iptables script, here it is (xxx = masked octet):

# Flush old rules, old custom tables
/sbin/iptables --flush
/sbin/iptables --delete-chain

# Set default policies for all three default chains
/sbin/iptables -P INPUT DROP
/sbin/iptables -P FORWARD DROP
/sbin/iptables -P OUTPUT ACCEPT

# Enable free use of loopback interfaces
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A OUTPUT -o lo -j ACCEPT

# All TCP sessions should begin with SYN
/sbin/iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# Accept inbound TCP packets (Do this *before* adding the 'blocked' chain)
/sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow the server's own IP to connect to itself
/sbin/iptables -A INPUT -i eth0 -s 208.xxx.xxx.178 -j ACCEPT

# Add the 'blocked' chain *after* we've accepted established/related connections
# so we remain efficient and only evaluate new/inbound connections
/sbin/iptables -N BLOCKED
/sbin/iptables -A INPUT -j BLOCKED

# Accept inbound ICMP messages
/sbin/iptables -A INPUT -p ICMP --icmp-type 8 -j ACCEPT
/sbin/iptables -A INPUT -p ICMP --icmp-type 11 -j ACCEPT

# ssh (private)
/sbin/iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

# ftp (private)
/sbin/iptables -A INPUT -p tcp --dport 21 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

# www (public)
/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# smtp (public)
/sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 2525 -j ACCEPT

# pop (public)
/sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT

# mysql (private)
/sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.184 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.196 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.251 -j ACCEPT

Any ideas? Thanks in advance. :-)

    Read the article

  • Does likewise-open > version 5.4 contain CIFS support?

    - by Ben Andken
I'm trying to get the CIFS server working in likewise-open. I've found this set of instructions and everything seems to work until I try to connect (http://www.likewise.com/resources/documentation_library/manuals/cifs/likewise-cifs-smb-file-server-guide.html#id2765992):

1.6. Build and Configure a Standalone Likewise-CIFS Server

This section demonstrates how to build and configure a standalone instance of Likewise-CIFS from the command line. The following procedure assumes that you want to set up Likewise-CIFS on a Linux server to share files with Windows computers in a network without Active Directory. This procedure also assumes you know how to build Linux applications from their source code and then install them.

Download Likewise-CIFS from its open source git location:

$ git clone git://git.likewiseopen.org/

Download, build, and install the following tools. The tools listed are known to work, but earlier or later versions might work as well. Also, instead of downloading the tools, you might be able to install them on your platform with apt-get or some other means.

http://ftp.gnu.org/gnu/autoconf/autoconf-2.65.tar.gz
http://ftp.gnu.org/gnu/automake/automake-1.9.6.tar.gz
http://ftp.gnu.org/gnu/libtool/libtool-2.2.6a.tar.gz
http://pkgconfig.freedesktop.org/releases/pkg-config-0.23.tar.gz
gcc --version 3.x or greater

Build Likewise-CIFS:

$ cd likewise-open
$ build/mkcomp --debug all

Install Likewise-CIFS:

$ sudo su
$ cd staging/install-root
$ tar cf - . | (cd / && tar xvf -)

Make sure Samba is not running:

$ /etc/init.d/smb stop

Make sure SELinux is either disabled or set to permissive. Make sure the ports required by Likewise are open. For a list of ports that Likewise uses, see the Likewise Open Installation and Administration Guide.

Configure Likewise Open:

$ /etc/init.d/lwsmd start
$ for i in /etc/likewise/*.reg; do /opt/likewise/bin/lwregshell upgrade $i; done
$ /etc/init.d/lwsmd stop
$ /etc/init.d/lwsmd start
$ /opt/likewise/bin/lwsm start srvsvc
$ /opt/likewise/bin/domainjoin-cli configure --enable nsswitch

Add a user account to the local Likewise provider database. In the following example, substitute the account name that you want for newuser.

$ /opt/likewise/bin/lw-add-user --home /home/newuser --shell /bin/bash newuser
Successfully added user newuser

Enable the user and set the password:

$ /opt/likewise/bin/lw-mod-user --enable-user --set-password newuser
New Password: **********
Successfully modified user newuser

Look up the new user's identity as follows. Substitute the value from the command hostname -s for the hostname. Keep in mind that Likewise truncates a hostname longer than 15 characters to the first 15 characters of the string.

% id hostname\\newuser
uid=2000(HOSTNAME\newuser) gid=1800(HOSTNAME\Likewise Users) groups=1800(HOSTNAME\Likewise Users) context=system_u:system_r:unconfined_t:s0

Make a CIFS directory for the user:

mkdir /lwcifs/newuser
chown 2000:1800 /lwcifs/newuser

From a Windows computer, map the Likewise-CIFS drive share:

Computer->Map Network Drive...
Folder: \\IP_hostname\c$
Click "Finish"
Username: hostname\newuser
Password: user_password

The last step fails when I try to connect. I've tried with Windows XP Pro and Windows 7 Pro. The rest of the directions only appear to work for version 5.4 (the one that shipped with 10.04). For 12.04, version 6.1 is the only one available and it doesn't appear to have the srvsvc module mentioned in these instructions. Is CIFS support dropped in the 6.1 version of likewise-open?

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >